path
stringlengths 7
265
| concatenated_notebook
stringlengths 46
17M
|
---|---|
ICCT/ENG/examples/02/TD-03-Mechanical-systems.ipynb | ###Markdown
Mechanical systems General mass-spring-damper model> The mass-spring-damper model consists of discrete mass nodes distributed throughout an object and interconnected via a network of springs and dampers. This model is well-suited for modelling object with complex material properties such as nonlinearity and viscoelasticity. (source: [Wikipedia](https://en.wikipedia.org/wiki/Mass-spring-damper_model "Mass-spring-model")) 1/4 car model> 1/4 car model is used to analyze the ride quality of automotive suspension systems. Mass $m_1$ is the "sprung mass", which is one-quarter of the vehicle mass that is supported by the suspension system. Mass $m_2$ is the "unsprung mass", which is lumped mass composed of one wheel and half-axle assembly, plus the shock absorber and suspensison springs. The stiffness and damping of the suspension system are modeled by the ideal spring constant $k_1$ and friction coefficient $B$, respecitvely. Tire stifness is modeled by spring constant $k_2$. (source: [Chegg Study](https://www.chegg.com/homework-help/questions-and-answers/figure-p230-shows-1-4-car-model-used-analyze-ride-quality-automotive-suspension-systems-ma-q26244005 "1/4 car model"))--- How to use this notebook?1. Toggle between *mass-spring-damper* and *1/4 car model* system by clicking on a corresponding button.2. Toggle betweeen *step function*, *impulse function*, *ramp function*, and *sine function* to select the function of the force $F$. 3. Move the sliders to change the values of the mass ($m$; $m_1$ and $m_2$), spring coefficients ($k$; $k_1$ and $k_2$), damping constant ($B$), input signal amplification and initial conditions ($x_0$, $\dot{x}_0$, $y_0$, $\dot{y}_0$). Mass-spring-damper 1/4 car model
###Code
# create figure
fig = plt.figure(figsize=(9.8, 4),num='Mechanical systems')
# add sublot
ax = fig.add_subplot(111)
ax.set_title('Time Response')
ax.set_ylabel('input, output')
ax.set_xlabel('$t$ [s]')
ax.grid(which='both', axis='both', color='lightgray')
inputf, = ax.plot([], [])
responsef, = ax.plot([], [])
responsef2, = ax.plot([], [])
arrowf, = ax.plot([],[])
style = {'description_width': 'initial'}
selectSystem=widgets.ToggleButtons(
options=[('mass-spring-damper',0),('1/4 car model',1)],
description='Select system: ', style=style) # define toggle buttons
selectForce = widgets.ToggleButtons(
options=[('step function', 0), ('impulse function', 1), ('ramp function', 2), ('sine function', 3)],
description='Select $F$ function: ', style=style)
display(selectSystem)
display(selectForce)
def build_model(M,K,B,M1,M2,B1,K1,K2,amp,x0,xpika0,y0,ypika0,select_System,index):
num_of_samples = 1000
total_time = 25
t = np.linspace(0, total_time, num_of_samples) # time for which response is calculated (start, stop, step)
global inputf, responsef, responsef2, arrowf
if select_System==0:
system0 = control.TransferFunction([1], [M, B, K])
if index==0:
inputfunc = np.ones(len(t))*amp
inputfunc[0]=0
time, response, xx = control.forced_response(system0, t, inputfunc, X0=[xpika0,x0])
elif index==1:
inputfunc=signal.unit_impulse(1000, 0)*amp
time, response, xx = control.forced_response(system0, t, inputfunc, X0=[xpika0,x0])
elif index==2:
inputfunc=t;
time, response, xx = control.forced_response(system0, t, inputfunc, X0=[xpika0,x0])
elif index==3:
inputfunc=np.sin(t)*amp
time, response, xx = control.forced_response(system0, t, inputfunc, X0=[xpika0,x0])
elif select_System==1:
system1=control.TransferFunction([M2, B1, K1+K2], [M1*M2, M1*B1+M2*B1, M2*K1+M1*(K1+K2), K2*B1, K1*K2])
system2=control.TransferFunction([B1*K1*M2**2, B1**2*K1*M2, B1*K1**2*M2 + 2*B1*K1*K2*M2,
B1**2*K1*K2, B1*K1**2*K2 + B1*K1*K2**2],
[M1**2*M2**2, B1*M1**2*M2 + 2*B1*M1*M2**2,
B1**2*M1*M2 + B1**2*M2**2 + K1*M1**2*M2 + 2*K1*M1*M2**2 + 2*K2*M1**2*M2 + K2*M1*M2**2,
2*B1*K1*M1*M2 + 2*B1*K1*M2**2 + B1*K2*M1**2 + 5*B1*K2*M1*M2 + B1*K2*M2**2,
B1**2*K2*M1 + 2*B1**2*K2*M2 + K1**2*M1*M2 + K1**2*M2**2 + K1*K2*M1**2 + 5*K1*K2*M1*M2 + K1*K2*M2**2 + K2**2*M1**2 + 2*K2**2*M1*M2,
2*B1*K1*K2*M1 + 4*B1*K1*K2*M2 + 3*B1*K2**2*M1 + 2*B1*K2**2*M2,
B1**2*K2**2 + K1**2*K2*M1 + 2*K1**2*K2*M2 + 3*K1*K2**2*M1 + 2*K1*K2**2*M2 + K2**3*M1,
2*B1*K1*K2**2 + B1*K2**3,
K1**2*K2**2 + K1*K2**3])
if index==0:
inputfunc = np.ones(len(t))*amp
inputfunc[0]=0
time, response, xx = control.forced_response(system1, t, inputfunc, X0=[0,0,xpika0,x0])
time2, response2, xx2 = control.forced_response(system2, t, inputfunc, X0=[0,0,0,0,0,0,ypika0,y0])
elif index==1:
inputfunc=signal.unit_impulse(1000, 0)*amp
time, response, xx = control.forced_response(system1, t, inputfunc, X0=[0,0,xpika0,x0])
time2, response2, xx2 = control.forced_response(system2, t, inputfunc, X0=[0,0,0,0,0,0,ypika0,y0])
elif index==2:
inputfunc=t;
time, response, xx = control.forced_response(system1, t, inputfunc, X0=[0,0,xpika0,x0])
time2, response2, xx2 = control.forced_response(system2, t, inputfunc, X0=[0,0,0,0,0,0,ypika0,y0])
elif index==3:
inputfunc=np.sin(t)*amp
time, response, xx = control.forced_response(system1, t, inputfunc, X0=[0,0,xpika0,x0])
time2, response2, xx2 = control.forced_response(system2, t, inputfunc, X0=[0,0,0,0,0,0,ypika0,y0])
ax.lines.remove(responsef)
ax.lines.remove(inputf)
ax.lines.remove(responsef2)
ax.lines.remove(arrowf)
inputf, = ax.plot(t,inputfunc,label='$F$',color='C0')
responsef, = ax.plot(time, response,label='$x$',color='C3')
if select_System==1:
responsef2, = ax.plot(time, response2,label='$y$',color='C2')
elif select_System==0:
responsef2, = ax.plot([],[])
if index==1:
if amp>0:
arrowf, = ax.plot([-0.1,0,0.1],[amp-((amp*0.05)/2),amp,amp-((amp*0.05)/2)],color='C0',linewidth=4)
elif amp==0:
arrowf, = ax.plot([],[])
elif amp<0:
arrowf, = ax.plot([-0.1,0,0.1],[amp-((amp*0.05)/2),amp,amp-(amp*(0.05)/2)],color='C0',linewidth=4)
else:
arrowf, = ax.plot([],[])
ax.relim()
ax.autoscale_view()
ax.legend()
def update_sliders(index):
global m1_slider, b1_slider, k1_slider, m21_slider, m22_slider, b2_slider, k21_slider, k22_slider
global x0_slider, xpika0_slider, y0_slider, ypika0_slider
m1val = [0.1,0.1,0.1,0.1]
k1val = [1,1,1,1]
b1val = [0.1,0.1,0.1,0.1]
m21val = [0.1,0.1,0.1,0.1]
m22val = [0.1,0.1,0.1,0.1]
b2val = [0.1,0.1,0.1,0.1]
k21val = [1,1,1,1]
k22val = [1,1,1,1]
x0val = [0,0,0,0]
xpika0val = [0,0,0,0]
y0val = [0,0,0,0]
ypika0val = [0,0,0,0]
m1_slider.value = m1val[index]
k1_slider.value = k1val[index]
b1_slider.value = b1val[index]
m21_slider.value = m21val[index]
m22_slider.value = m22val[index]
b2_slider.value = b2val[index]
k21_slider.value = k21val[index]
k22_slider.value = k22val[index]
x0_slider.value = x0val[index]
xpika0_slider.value = xpika0val[index]
y0_slider.value = y0val[index]
ypika0_slider.value = ypika0val[index]
def draw_controllers(type_select,index):
global m1_slider, b1_slider, k1_slider, m21_slider, m22_slider, b2_slider, k21_slider, k22_slider
global x0_slider, xpika0_slider, y0_slider, ypika0_slider
x0_slider=widgets.FloatSlider(value=0, min=-1, max=1., step=.1,
description='$x_0$ [dm]:',disabled=False,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.2f',)
xpika0_slider=widgets.FloatSlider(value=0, min=-1, max=1., step=.1,
description='${\dot{x}}_0$ [dm/s]:',disabled=False,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.2f',)
if type_select==0:
amp_slider = widgets.FloatSlider(value=1.,min=-2.,max=2.,step=0.1,
description='Input signal amplification:',disabled=False,continuous_update=False,orientation='horizontal',readout=True,readout_format='.1f',style=style)
m1_slider = widgets.FloatSlider(value=.1, min=.01, max=1., step=.01,
description='$m$ [kg]:',disabled=False,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.2f',)
k1_slider = widgets.FloatSlider(value=1.,min=0.,max=20.,step=.1,
description='$k$ [N/m]:',disabled=False,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.1f',)
b1_slider = widgets.FloatSlider(value=.1,min=0.0,max=0.5,step=.01,
description='$B$ [Ns/m]:',disabled=False,continuous_update=False,
rientation='horizontal',readout=True,readout_format='.2f',)
m21_slider = widgets.FloatSlider(value=.1,min=.01,max=1.,step=.01,
description='$m_1$ [kg]:',disabled=True,continuous_update=False,orientation='horizontal',readout=True,readout_format='.2f',
)
m22_slider = widgets.FloatSlider(value=.1,min=.0,max=1.,step=.01,
description='$m_2$ [kg]:',disabled=True,continuous_update=False,orientation='horizontal',readout=True,readout_format='.2f',
)
b2_slider = widgets.FloatSlider(value=.1,min=0.0,max=2,step=.01,
description='$B$ [Ns/m]:',disabled=True,continuous_update=False,orientation='horizontal',readout=True,readout_format='.2f',
)
k21_slider = widgets.FloatSlider(value=1.,min=0.,max=20.,step=.1,
description='$k_1$ [N/m]:',disabled=True,continuous_update=False,orientation='horizontal',readout=True,readout_format='.1f',
)
k22_slider = widgets.FloatSlider(value=1.,min=0.,max=20.,step=.1,
description='$k_2$ [N/m]:',disabled=True,continuous_update=False,orientation='horizontal',readout=True,readout_format='.1f',
)
y0_slider=widgets.FloatSlider(value=0, min=-1, max=1., step=.1,
description='$y_0$ [dm]:',disabled=True,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.2f',)
ypika0_slider=widgets.FloatSlider(value=0, min=-1, max=1., step=.1,
description='${\dot{y}}_0$ [dm/s]:',disabled=True,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.2f',)
elif type_select==1:
amp_slider = widgets.FloatSlider(value=1.,min=-2.,max=2.,step=0.1,
description='Input signal amplification:',disabled=False,continuous_update=False,orientation='horizontal',readout=True,readout_format='.1f',style=style)
m1_slider = widgets.FloatSlider(value=.1, min=.01, max=1., step=.01,
description='$m$ [kg]:',disabled=True,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.2f',)
k1_slider = widgets.FloatSlider(value=1.,min=0.,max=20.,step=.1,
description='$k$ [N/m]:',disabled=True,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.1f',)
b1_slider = widgets.FloatSlider(value=.1,min=0.0,max=0.5,step=.01,
description='$B$ [Ns/m]:',disabled=True,continuous_update=False,
rientation='horizontal',readout=True,readout_format='.2f',)
m21_slider = widgets.FloatSlider(value=.1,min=.01,max=1.,step=.01,
description='$m_1$ [kg]:',disabled=False,continuous_update=False,orientation='horizontal',readout=True,readout_format='.2f',
)
m22_slider = widgets.FloatSlider(value=.1,min=.0,max=1.,step=.01,
description='$m_2$ [kg]:',disabled=False,continuous_update=False,orientation='horizontal',readout=True,readout_format='.2f',
)
b2_slider = widgets.FloatSlider(value=.1,min=0.0,max=2,step=.01,
description='$B$ [Ns/m]:',disabled=False,continuous_update=False,orientation='horizontal',readout=True,readout_format='.2f',
)
k21_slider = widgets.FloatSlider(value=1.,min=0.,max=20.,step=.1,
description='$k_1$ [N/m]:',disabled=False,continuous_update=False,orientation='horizontal',readout=True,readout_format='.1f',
)
k22_slider = widgets.FloatSlider(value=1.,min=0.,max=20.,step=.1,
description='$k_2$ [N/m]:',disabled=False,continuous_update=False,orientation='horizontal',readout=True,readout_format='.1f',
)
y0_slider=widgets.FloatSlider(value=0, min=-1, max=1., step=.1,
description='$y_0$ [dm]:',disabled=False,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.2f',)
ypika0_slider=widgets.FloatSlider(value=0, min=-1, max=1., step=.1,
description='${\dot{y}}_0$ [dm/s]:',disabled=False,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.2f',)
input_data = widgets.interactive_output(build_model, {'M':m1_slider, 'K':k1_slider, 'B':b1_slider, 'M1':m21_slider,
'M2':m22_slider, 'B1':b2_slider, 'K1':k21_slider, 'K2':k22_slider, 'amp':amp_slider,
'x0':x0_slider,'xpika0':xpika0_slider,'y0':y0_slider,'ypika0':ypika0_slider,
'select_System':selectSystem,'index':selectForce})
input_data2 = widgets.interactive_output(update_sliders, {'index':selectForce})
box_layout = widgets.Layout(border='1px solid black',
width='auto',
height='',
flex_flow='row',
display='flex')
buttons1=widgets.HBox([widgets.VBox([amp_slider],layout=widgets.Layout(width='auto')),
widgets.VBox([x0_slider,xpika0_slider]),
widgets.VBox([y0_slider,ypika0_slider])],layout=box_layout)
display(widgets.VBox([widgets.Label('Select the values of the input signal amplification and intial conditions:'), buttons1]))
display(widgets.HBox([widgets.VBox([m1_slider,k1_slider,b1_slider], layout=widgets.Layout(width='45%')),
widgets.VBox([m21_slider,m22_slider,k21_slider,k22_slider,b2_slider], layout=widgets.Layout(width='45%'))]), input_data)
widgets.interactive_output(draw_controllers, {'type_select':selectSystem,'index':selectForce})
###Output
_____no_output_____ |
Uninformed_Search_problems_ICE2_Irfan.ipynb | ###Markdown
Solving problems by SearchingThis notebook serves as supporting material for topics covered in **Chapter 3 - Solving Problems by Searching** from the book *Artificial Intelligence: A Modern Approach.* This notebook uses implementations from [search.py](https://github.com/aimacode/aima-python/blob/master/search.py) module. Let's start by importing everything from search module.
###Code
from search import *
from notebook import psource, heatmap, gaussian_kernel, show_map, final_path_colors, display_visual, plot_NQueens
# Needed to hide warnings in the matplotlib sections
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
CONTENTS* Overview* Problem* Node* Simple Problem Solving Agent* Search Algorithms Visualization* Breadth-First Tree Search* Breadth-First Search* Depth-First Tree Search* Depth-First Search* Uniform Cost Search OVERVIEWHere, we learn about a specific kind of problem solving - building goal-based agents that can plan ahead to solve problems. In particular, we examine navigation problem/route finding problem. We must begin by precisely defining **problems** and their **solutions**. We will look at several general-purpose search algorithms.Search algorithms can be classified into two types:* **Uninformed search algorithms**: Search algorithms which explore the search space without having any information about the problem other than its definition. * Examples: 1. Breadth First Search 2. Depth First Search 3. Depth Limited Search 4. Iterative Deepening Search 5. Uniform Cost Search*Don't miss the visualisations of these algorithms solving the route-finding problem defined on Romania map at the end of this notebook.* For visualisations, we use networkx and matplotlib to show the map in the notebook and we use ipywidgets to interact with the map to see how the searching algorithm works. These are imported as required in `notebook.py`.
###Code
%matplotlib inline
import networkx as nx
import matplotlib.pyplot as plt
from matplotlib import lines
from ipywidgets import interact
import ipywidgets as widgets
from IPython.display import display
import time
###Output
_____no_output_____
###Markdown
PROBLEMLet's see how we define a Problem. Run the next cell to see how abstract class `Problem` is defined in the search module.
###Code
psource(Problem)
###Output
_____no_output_____
###Markdown
The `Problem` class has six methods.* `__init__(self, initial, goal)` : This is what is called a `constructor`. It is the first method called when you create an instance of the class as `Problem(initial, goal)`. The variable `initial` specifies the initial state $s_0$ of the search problem. It represents the beginning state. From here, our agent begins its task of exploration to find the goal state(s) which is given in the `goal` parameter.* `actions(self, state)` : This method returns all the possible actions agent can execute in the given state `state`.* `result(self, state, action)` : This returns the resulting state if action `action` is taken in the state `state`. This `Problem` class only deals with deterministic outcomes. So we know for sure what every action in a state would result to.* `goal_test(self, state)` : Return a boolean for a given state - `True` if it is a goal state, else `False`.* `path_cost(self, c, state1, action, state2)` : Return the cost of the path that arrives at `state2` as a result of taking `action` from `state1`, assuming total cost of `c` to get up to `state1`.* `value(self, state)` : This acts as a bit of extra information in problems where we try to optimise a value when we cannot do a goal test. NODELet's see how we define a Node. Run the next cell to see how abstract class `Node` is defined in the search module.
###Code
psource(Node)
###Output
_____no_output_____
###Markdown
The `Node` class has nine methods. The first is the `__init__` method.* `__init__(self, state, parent, action, path_cost)` : This method creates a node. `parent` represents the node that this is a successor of and `action` is the action required to get from the parent node to this node. `path_cost` is the cost to reach current node from parent node.The next 4 methods are specific `Node`-related functions.* `expand(self, problem)` : This method lists all the neighbouring(reachable in one step) nodes of current node. * `child_node(self, problem, action)` : Given an `action`, this method returns the immediate neighbour that can be reached with that `action`.* `solution(self)` : This returns the sequence of actions required to reach this node from the root node. * `path(self)` : This returns a list of all the nodes that lies in the path from the root to this node.The remaining 4 methods override standards Python functionality for representing an object as a string, the less-than ($<$) operator, the equal-to ($=$) operator, and the `hash` function.* `__repr__(self)` : This returns the state of this node.* `__lt__(self, node)` : Given a `node`, this method returns `True` if the state of current node is less than the state of the `node`. Otherwise it returns `False`.* `__eq__(self, other)` : This method returns `True` if the state of current node is equal to the other node. Else it returns `False`.* `__hash__(self)` : This returns the hash of the state of current node. We will use the abstract class `Problem` to define our real **problem** named `GraphProblem`. You can see how we define `GraphProblem` by running the next cell.
###Code
psource(GraphProblem)
###Output
_____no_output_____
###Markdown
Have a look at our romania_map, which is an Undirected Graph containing a dict of nodes as keys and neighbours as values.
###Code
romania_map = UndirectedGraph(dict(
Arad=dict(Zerind=75, Sibiu=140, Timisoara=118),
Bucharest=dict(Urziceni=85, Pitesti=101, Giurgiu=90, Fagaras=211),
Craiova=dict(Drobeta=120, Rimnicu=146, Pitesti=138),
Drobeta=dict(Mehadia=75),
Eforie=dict(Hirsova=86),
Fagaras=dict(Sibiu=99),
Hirsova=dict(Urziceni=98),
Iasi=dict(Vaslui=92, Neamt=87),
Lugoj=dict(Timisoara=111, Mehadia=70),
Oradea=dict(Zerind=71, Sibiu=151),
Pitesti=dict(Rimnicu=97),
Rimnicu=dict(Sibiu=80),
Urziceni=dict(Vaslui=142)))
romania_map.locations = dict(
Arad=(91, 492), Bucharest=(400, 327), Craiova=(253, 288),
Drobeta=(165, 299), Eforie=(562, 293), Fagaras=(305, 449),
Giurgiu=(375, 270), Hirsova=(534, 350), Iasi=(473, 506),
Lugoj=(165, 379), Mehadia=(168, 339), Neamt=(406, 537),
Oradea=(131, 571), Pitesti=(320, 368), Rimnicu=(233, 410),
Sibiu=(207, 457), Timisoara=(94, 410), Urziceni=(456, 350),
Vaslui=(509, 444), Zerind=(108, 531))
###Output
_____no_output_____
###Markdown
It is pretty straightforward to understand this `romania_map`. The first node **Arad** has three neighbours named **Zerind**, **Sibiu**, **Timisoara**. Each of these nodes are 75, 140, 118 units apart from **Arad** respectively. And the same goes with other nodes.And `romania_map.locations` contains the positions of each of the nodes. We will use the straight line distance (which is different from the one provided in `romania_map`) between two cities in algorithms like A\*-search and Recursive Best First Search.**Define a problem:**Now it's time to define our problem. We will define it by passing `initial`, `goal`, `graph` to `GraphProblem`. So, our problem is to find the goal state starting from the given initial state on the provided graph. Say we want to start exploring from **Arad** and try to find **Bucharest** in our romania_map. So, this is how we do it.
###Code
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
###Output
_____no_output_____
###Markdown
Romania Map VisualisationLet's have a visualisation of Romania map [Figure 3.2] from the book and see how different searching algorithms perform / how frontier expands in each search algorithm for a simple problem named `romania_problem`. Have a look at `romania_locations`. It is a dictionary defined in search module. We will use these location values to draw the romania graph using **networkx**.
###Code
romania_locations = romania_map.locations
print(romania_locations)
###Output
{'Arad': (91, 492), 'Bucharest': (400, 327), 'Craiova': (253, 288), 'Drobeta': (165, 299), 'Eforie': (562, 293), 'Fagaras': (305, 449), 'Giurgiu': (375, 270), 'Hirsova': (534, 350), 'Iasi': (473, 506), 'Lugoj': (165, 379), 'Mehadia': (168, 339), 'Neamt': (406, 537), 'Oradea': (131, 571), 'Pitesti': (320, 368), 'Rimnicu': (233, 410), 'Sibiu': (207, 457), 'Timisoara': (94, 410), 'Urziceni': (456, 350), 'Vaslui': (509, 444), 'Zerind': (108, 531)}
###Markdown
Let's get started by initializing an empty graph. We will add nodes, place the nodes in their location as shown in the book, add edges to the graph.
###Code
# node colors, node positions and node label positions
node_colors = {node: 'white' for node in romania_map.locations.keys()}
node_positions = romania_map.locations
node_label_pos = { k:[v[0],v[1]-10] for k,v in romania_map.locations.items() }
edge_weights = {(k, k2) : v2 for k, v in romania_map.graph_dict.items() for k2, v2 in v.items()}
romania_graph_data = { 'graph_dict' : romania_map.graph_dict,
'node_colors': node_colors,
'node_positions': node_positions,
'node_label_positions': node_label_pos,
'edge_weights': edge_weights
}
###Output
_____no_output_____
###Markdown
We have completed building our graph based on romania_map and its locations. It's time to display it here in the notebook. This function `show_map(node_colors)` helps us do that. We will be calling this function later on to display the map at each and every interval step while searching, using variety of algorithms from the book. We can simply call the function with node_colors dictionary object to display it.
###Code
show_map(romania_graph_data)
###Output
_____no_output_____
###Markdown
Voila! You see, the romania map as shown in the Figure[3.2] in the book. Now, see how different searching algorithms perform with our problem statements. SIMPLE PROBLEM SOLVING AGENT PROGRAMLet us now define a Simple Problem Solving Agent Program. Run the next cell to see how the abstract class `SimpleProblemSolvingAgentProgram` is defined in the search module.
###Code
psource(SimpleProblemSolvingAgentProgram)
###Output
_____no_output_____
###Markdown
The SimpleProblemSolvingAgentProgram class has six methods: * `__init__(self, intial_state=None)`: This is the `contructor` of the class and is the first method to be called when the class is instantiated. It takes in a keyword argument, `initial_state` which is initially `None`. The argument `initial_state` represents the state from which the agent starts.* `__call__(self, percept)`: This method updates the `state` of the agent based on its `percept` using the `update_state` method. It then formulates a `goal` with the help of `formulate_goal` method and a `problem` using the `formulate_problem` method and returns a sequence of actions to solve it (using the `search` method).* `update_state(self, percept)`: This method updates the `state` of the agent based on its `percept`.* `formulate_goal(self, state)`: Given a `state` of the agent, this method formulates the `goal` for it.* `formulate_problem(self, state, goal)`: It is used in problem formulation given a `state` and a `goal` for the `agent`.* `search(self, problem)`: This method is used to search a sequence of `actions` to solve a `problem`. Let us now define a Simple Problem Solving Agent Program. We will create a simple `vacuumAgent` class which will inherit from the abstract class `SimpleProblemSolvingAgentProgram` and overrides its methods. We will create a simple intelligent vacuum agent which can be in any one of the following states. It will move to any other state depending upon the current state as shown in the picture by arrows:![image.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAyEAAAGuCAIAAAAI9Z6AAAAgAElEQVR4nOydd3xUVfr/z53eJ5UkQADpXaqixoKhiSgElCK9BFlBRRRdy1fWggIKC7uUZSGw0rHAUoSlCyJVOiIEQoBACGnT596ZO3Pv74/PL+c1G0gW4qTBef+R18zkzp1z63nuUz4PJ0mSz+fT6XSyLHMcRwiRJIm+ZjAYDAaDwWCURCAQ0Gg0hBC/369WqzmOs9vtERERhBAuGAwqFIpAIKBSqQgh+fn5MTExPp9Pq9VW8qgZDAaDwWAwqgOiKHIcB1OKECIIgk6nI7Isy7KsUCgIIXq9vlJHyGAwGAwGg1G9MRqNhBBZllUej8doNMK7xfM8XUKpVFbe8BgMBoPBYDCqARqNxu/3I8+KEBIREWG32xEM5PARsq8QSsR3gsFg5Q2YwWAwGAwGoxrg8/nUajU1n5xOZ82aNT0ejyzLKkLIrVu3VCqVSqWSJAlL2Gy2yMjIShsvg8FgMBgMRnXAYDDgBVLbLRaLx+NBCpbK7XbHxcUFAgGO4wKBgFardblckZGR1N5iMBgMBoPBYNwRURRVKpVSqVSpVLSCEIbXf8UKZVnmeV6v13u9XmqXMRgMBoPBYDBKAqnteO33+6Ojo91utyzL/1+7gdpY1ARjMBgMBoPBYPxPJEkKlcHiOI7jOEmSFJU9MAaDwWAwGIz7EGZjMRgMBoPBYIQfZmMxGAwGg8FghB9mYzEYDAaDwWCEH2ZjMRgMBoPBYIQfZmMxGAwGg8FghB9mYzEYDAaDwWCEH2ZjMRgMBoPBYIQfZmMxGAwGg8FghB9mYzEYDAaDwWCEH2ZjMRgMBoPBYIQfZmMxGAwGg8FghB9mYzEYDAaDwWCEH2ZjMRgMBoPBYIQfZmMxGAwGg8FghB9mYzEYDAaDwWCEH2ZjMRgMBoPBYIQfZmMxGAwGg8FghB9mYzEYDAaDwWCEH2ZjMRgMBoPBYIQfZmMxGAwGg8FghB9mYzEYDAaDwWCEH2ZjMRgMBoPBYIQfZmMxHmgEQRBFkRASDAYJIX6/3+/3V/ag7kNkWSaEBAIBSZLwid/vp6/viN1up68lSfL5fIQQj8dDCBFFESvEwQoEAiWtRJIkekAlScKS+C4jjJTh+DIYDwLMxmI80Oh0OrVabbPZFAp2LZQjsiwLgqBSqQghwWAwEAhoNJrS97larQ4EAg6Hw+/3KxQKrVZLCDEajTzPq9VqjuN4ntdoND6fD6u9IwqFQqPRBAIB2NAqlUoQBGZjhZ0yHF8G40GgxHsTg/Eg4PF4jEajyWTiOI4QotFoCCE+n0+tVlf20O4rFAqF3W6Pj4+XZVmpVOJDv9+PHX477du3v3Hjxq1btzBVC4Kg0Wjq168/adKk1NRUSZIUCoVeryeE4MCVBH6Czv2EEAyDuVjCy70eXwbjAYELBoMKhQL3KVmWA4FAKQ+FDMZ9hizLOPkLCgqioqJcLpfFYqnsQd3P+P1+juMUCgWdiUti3rx5EyZMePrpp7dv367RaLZv396zZ0+NRrNu3brk5GRZljUajSiKd2MNB4NBSZLwlTBtB+PO3P3xZTDuJ/DgRy0ojuM4jpMkiflyGQ80HMe53W5CSHR0NMdxFosFbxnhJSsrixDi8Xg0Go1arQ4Gg/AqlULDhg0RLtRoNB6Pp1u3bpMmTeJ5/pNPPlGr1RqNRhAErKr09eC38BWkc2EwjDBShuPLYDwIMJcV40HHZDLh4QNOkWAwyPM8sn8Y4SIxMZEQYjAY8Faj0WAaLsnbIYpiIBAQRREZ68js0ev1Op2udu3adCWEEKVSSZ2Rt4OZnrqvMIDExEQWKwwv93p8GYwHBGZjMR5o3G63yWRCxFytVtvt9oiIiMoe1P3JpUuXGjZs6Ha7NRoNx3Glx/gQ2lMqlX6/XxRFvV6/bt26zz77rG/fvl988QWy6JCnpdPpgsFgSRkOmONRh+j3+00mE4bB0rHDzj0dXwbjAYHZWI
wHGhhY8Fp5vd7IyEir1epwOCp7XPchGo3G6/WaTCZCCPQySsmm0mq1LpdLrVafPn06JibG6XQajcZvvvlmwIABGo0mPz/faDQGg0GdTld6Shb9r0ajgXOlRYsWTJ6jPLin48tgPCCwhzlGtcTr9ZLbhI54ni+2mCAIVDyJKirhuxS4Oux2u8FgUKlUMLCQ+U4jhgaDoTyiHmq1Wq1WK5XKYqEuONXo32LfwkgwmdEXVTAoEzpOjuP8fj8+CQQCarVakqTSJ2CDwSAIwlNPPZWenv7oo4/6/f59+/ZBiAG+RqVSKUlSaOoPzaXjeR7RQIVCERpJhGMMr6vFfiOVNM5iP6dQKGhpVLFl6DglSVIqlSgvwJHFUYZLki4DnTNCSDAYFAQBr91uN65THJ1ikdxS9M8YjCoO82MxqiUGg8HpdFosFoSNCgoKNBqN2WzmeV6n0924cePcuXMbNmw4d+7c4cOHeZ4fPXp0165d27dv37BhQ1qPZrfbtVqt1+uNjo7GtI27uV6vdzqdhBBRFI1GI8dxHo8H80S4pjqr1VpYWIjHfUxdSqVSq9UGAgGIN2KawV+tVhuqAhUIBBQKhdvt1ul0hBC3261QKELjZYFAoLJMh2LDoOOkM+td4nK5oqOjCSGSJMXFxX322WfdunVbvHhx06ZN3377bUIIoroKhQI7weVymc1mOt9D1sHlcun1ehT4ULVStVqt1+v9fn+12G+kko5vMBjUaDRwExYzWO84TqVS6fF4kICFXY3F8IQQCAS8Xq9arXY6nfv379+3b9/f/va3yMhIm8321ltvPfbYYykpKSqViud5HBpY0iqVKicnJyoqilWDMqoxwWCQPmTIsozEBQajiuNyuWRZhoo3/TAQCMiyvGDBAo1G87e//S0zM/PWrVuyLAeDwd9+++3vf/97SkpKSkrKoUOHZFn2er34Fs/zPp+P53k8VVPnCtJ4QymPJB7UYRX7UKlUGgwGs9lM3+r1etgNoGbNmqQo/kXfgmJLVjAljZOqw8hFNxnceUpCFMW9e/cSQtq2bSuKoiiKq1atIoRwHLd79266mNvtlmUZ5wB1meTl5fl8vtBbmc/noz+HgVWX/VZZ4yx2TqrVapVKxXFcSeOEzRd6UCgFBQWyLOfk5EyYMKFz585z5869cuUKLmFBEA4ePPjNN98QQubMmZOfn+/z+ZxOJz1q9Aq92/sCg1FJ4A5DbzuEEI7jZFlmNhajuoJzFbLdubm5wWAwIyOjcePGc+fOzc3NlWUZBhbmXfqVvXv3PvXUU0OGDOF53uv1BoPB0HOeFHmq9Hr9xYsX6ef4lTBC5yGYd/RzzCuho8IV6nA4Qo0St9sdHR2NUGN0dHTorBYMBh0OR3hHe/eUNM57tbFkWU5LSyOEJCcnY+eLojhs2DCNRvP444+fO3eOLoZjjZ/GCxxx7FX8liRJgiDwPO9yuUhRKLZa7LfKGmfoVYO3KPMsaZyEEJ1OB+8g/lVYWGiz2XA+f/HFF0lJSd999x3P8/n5+bIsX716VZZlu90uyzLP8w6HY/Hixa1btz527Jhc9PwTCAQyMzPluzhVGIxKh9lYjPsNnucRwgMHDx5s3rz5b7/9Fmq14Cbu8/lwZ5eLfF1Lly594YUXfv75Z/qsHHohmEwmk8mEz6m7S5blwsJCKUzcvjnQjKBvIV6AAWMzJUnyer3oqCj/dyKaLMv43Ov1YuXhGmcZtuuO47xXG6tFixaEEKPRiK9v375dkqT9+/dTccvly5cHAgEc3/z8fLpLEdjC62IGAUAArrrst8oaJ/aV3+/3+Xw4Dymlj1MuciuCo0ePdunS5e2336alncUOB316cTgcly9fbtmy5XfffSfLclZWVimnB4NR1SjJxmI674zqCvI2kOgzb968tLS0vXv3mkwmh8MRFxeHfji0vJ8QEggE3G53RESELMter/e33357++23P/roow4dOkRHR3u9XqVSWWwCpi8g8VB625Z7AtErtHmRZdnlcul0Oo1Gg0kIFXB+v58KDt0Ox3G4VDEFhmtgYQfjDA3V4SYjSVLpsVfaYFilUikUisLCwqioKJ/PFypdFggEXC5XZGSk3++HtwwrP378eL169aKiovArSK/GAlqtVpZllUpVLfYbqaTje7u0FQ7H7alRt48TqXKSJO3YseOLL76YMWPGo48+6nQ6zWYzJH91Op1KpcIho+c5DpzX6x05cmSXLl1SU1Pz8vJiYmLweRgvPQajPJBK0HlnNhajWoLEcJhEP/300wcffLB582az2YywBTKg0dZDrVbDWYWF0blWpVK5XK7CwsIvv/yybdu2Q4YMMRqNoihqNBqTyeT1eg0GQ0FBAVKksU65ZKHL8IJfxwu/3x8RESGKolSU/E6KbBR0LCFFUlL0yoXRWVkaqqG/HjpO+rRH7s7GKvbfW7duxcXFIckaNpPL5aJKZsU0ArKzs5s0aeLz+Zo1a9a1a9cuXbo8+eSTcIkJgqDX6zG1V4v9Rirp+IqiSPc/znyYXCWNEzaTy+VC2YHP51uyZMnq1av/9a9/xcbGIrMwdHIJPb64WlG8Qgix2Wzjx49/4YUXBg0ahAXovxiMKktJNhaLFTKqMTzPnz59ukOHDtevX0ciCFJV8vLy5JAwBI1QeDyeYme43W5/7bXXPv/8c5vNJssyCZm96Ldg4uBtMEzQsSE+KAjCtGnTcHFGRkbOmjUL/y3mf5ZDUrVCL28sExoXC9c475WSxlmGfCyoj8qynJOTU+xzGM0+nw+Hm6Ze0cgvLRcghCgUCq1W26JFi8mTJ+/evZuEtJGu+vutssaZkZExbtw4/HpqaioS4EKDhrePE95ESZKCweDMmTNHjx6NyzD0K3ghCIIkST6fz+fzIRZPE7BwBAVB6NChw88//4x8eQaj6hNksUJGdSRYFLPAUwLu4Gq1WpZl1I5169YtLS0NigxlWD/8Q6+88orJZEJBIv05URShwIQHbvkP+LF8Ph/yrENfI/6Ih/jU1NTGjRv37t27UaNGTqfznXfeeeyxx4YOHUoIUavV8NPc8TmJFBku5K5jcOXNHceJqO6GDRscDkdERIQgCIjZBcPU1Y7n+ZiYGI/Ho9fr7Xb7iBEjEEoOlRvFnvz/Nz6Oqxb7jZT/8cWO8nq9KpUKLR1v3rw5cODAWbNmPfroozzPnz179oMPPli0aFGTJk1KGqdarUaGllqtXrRo0eHDh+fOnRsaeb97sObMzMxWrVqdOXMmKirKarUSQnw+H84irJa6exmMqkBJfiymQcqo0tDpBC8QKiKEcByn0+mmTp366quvNmjQAAZWMXHRu8Tv969cuVKj0bz33nuEFoOECdgQ8I0hjV2lUmGyN5lMNpvNbDZDpGDChAmNGzd2u91Wq/Wjjz6aMGECzRm6D8SyEWMymUwWiwWGAgr9wvUQGRERwXGcy+XKz883GAzFDCycM6IoxsfHy7LMAk8U2hHSYDDgXDUaje++++7nn3/evn17rVYbERGBYPquXbtKWY/f769du7YgCDNmzLh06dKMGTPKJhwqiiKmqLi4u
DVr1sycOdNqtUJGDnFJPGwQQvCgVYafYDAqlCCLFTKqNqHBFPpaFMUtW7YMGjQIIbwyz9a01DwvL69v377QvcQjCOwb+tPSnYoB/yd0YMFgsFh9Fv1Xt27dfv31V1mWs7Oz5aIQ5zvvvLNixQosgE/u6IsuQwyuvLnjOAkhVqvV7XZTcYrw/qjb7XY6nTRWiB/VaDR6vV6hUDRs2HD06NFpaWnXr1+PjIwkZYpdljeVeHwFQaDn2PHjx59//nm5KH6HH83KykpOTi4sLCxpnJhBvv3228GDB7tcLhoBL9tg6Ou33npr8eLFeE0vH3qUw66owmCUmZJihSwsyKjqyLJMk3+RhE4I4Xk+LS1t4sSJeFvmbhtwaYiiGB0d/cMPP3AcV6tWrRs3boQrvV2r1SLmGBrfQewPIY/ff//d6/W2bdvW4/EkJCQQQjiO8/l8I0eOPHDggM/n8/v9odlF1Re32w0fHgmZlcMFShaw8ry8PIPBULt27YcffnjAgAHNmjVr3rw5IQQ1iTabjRWphUILb6HacPbsWWRiwa0VDAZVKlXt2rURpi9lPUqlcuPGjWlpaTQjXi6Tn0mWZYRdBEGYMGHC0KFDR4wYgai9UqmkExgpH01gBiO8sHOUUdWRJAmTIsrK8PrgwYPR0dFJSUmID6KaibY/u1fkolwrlUoVRgOLEMJxXDAYlIoa40CyAR8SQn7//feZM2eOGDHC6/XC2vN4PLSRyL59+/x+P+0PU62BSoVWq8W+hZuQOiT+OMi2JoT4fL7Y2Njz58+fPXv222+/TUlJad68uSzLdrsdAWXM05W8O6oMHo8HBhY01g8dOjRs2LDGjRvLsowTT6PRYHd16tQpOzu7pPUgdj9jxgw8VPj9flTv3ut4gsGgTqfDhaxSqerXr9+/f/+VK1fiLdLp1Go1nqnugxg6476H2ViMqo4cUpuGuzbP8wsXLvzTn/5EipqiwGQpW46tIAhYid1uJ4QgFztMYyeEELmoxRvSSpRKJQSfsrKyxo0bd/LkydD0IKPRyPO8Vqtt3Ljx8uXLISkEdfJqDSwqv9+PaC8hhJpcYYEWKxBCZFlOTEz0er2CICDZCBMzzLtKbJhTBUHauCAIMTExy5cv79GjByGkcePG9HEFSYSCILRp0+bChQslrQfOp4SEBJfLpVar6XNC2UaFQykIgiiKqampaWlpbrebK9KPoLUvZVs5g1GRMBuLUW2gpk96ejrP823atEGGLHUUlY1AIID7dURERCAQCG+xUujAICJACMnNzT1y5EidOnWmTZv2yCOPNGvWDD4DDIOqDdFcnPsgRxsbDksIm4nu1+HSGiCEQDdLq9XSnGidTkctZvxFs5fK3BFVDJ7nCSGyLA8bNuzEiRPr169/+eWXA4EALFGE6bEnIc1Q0nqUSiXP83a7HWdyaETvnlAqlaIo4mxBqaNOp0tISDh69CgpOsQkfK3ZGYzyhtlYjKoOTA0kixBC7Hb7xYsXhw8f7vF41Go19NmpYHQZ1q/T6ZRKpcPhwFubzRZGgUfa+4VOOQUFBVu2bJkwYUJubu7hw4cXLFgAkWtCiFKpRM4QfAAqlQqxxfsg7wRzOcDmQClAGSYEQSi2lwwGA0JLcB/SyBepPKHRKojBYLh582afPn2SkpLee++9r776Kj4+nuq204uOEIK+kCWtB7sU1Z3IeFOr1aEH/e5B5wNBEGrUqIFrf8yYMcePH0dpKq7xYgmODEaVhZ2mjMoB3v5Qhz/uyIWFhXhL/wX7A94dQkhERERaWtrjjz+OOz5qzpEPWwaJLHyREGK1WgOBAF6XYqvR5376CabwUoDnBpnshJC//OUvBw8e3LFjx5o1a86ePavRaCIiIuCNI0UP6Mj5bdiwodPphKV1r9tV1bg9xe2PuB5vB2Fi7D3q5EC+DrUY6MJls8WrFIhrB4NBnBsFBQWlL0+vJnq64ivHjh1LTk5+7bXXUlJS+vbt+9BDD6HukhSVmFBTpn379idPnixp/aHmFEo0JEkqW1gWwV+ahh8MBpOSktavX+90OgkhCoXC4/FgsdzcXLpF2A/3wZXCuM9gNhajQqF5HnDwYDrEhIFZEDdoWktIihJo/H4/FkhPTz969GhUVFSlRNAwGDqloc6xlPATsukVCgXP8zqdbtiwYQaDYf78+YsWLfrtt98WLFgA2UasCsERQojP53M6ncOGDbPZbF6vl/ldGMVAHyF4cGVZhuYIjI87AhursLCQJoxHR0d/9913KSkphw4datmyZa9evebPn//cc89RoVGNRqPVamlOlSzLlaL5icvHbDZHR0fn5ORotVqj0QhdiRo1ahBC0KkJ1wgNMlb8OBmMO8JsLEaFQm/Z6A1CCMnNzT1//jzHcXFxcRzHRUVFcRyXlZWFmQAP04QQjUaDR9tgMDh06FCj0RjqGqmwanwMJiYmxul0ykWiWaWU/tFayLNnz7Zs2fKll16aPn36qFGjrl69Om3aNGQHw6zkeZ7OYWibeP78eSg8VcymMaoXsiyHlhG4XK5SnjqQhB4VFUUIQVh8/vz533333YULF3bu3Dl69OgZM2a0b99ep9OFxgeDwaDH44GPyuv1htf1WNJGhb6FgaVQKHr06PGf//wnPj4edqTBYGjSpAk8XlqtVqFQoGX7M888s3r1aqb/zqg6MBuLUaHAxsKdFE6sGjVqNGzYELOFwWBITU2VZblWrVrItYK7iwY4XC5XVlbWU089hbewwKhUYwWgVCq9Xm8wGLRYLChn43leKhkkl6xbt27MmDGLFy/u1avX0KFDW7ZsOWvWLDyO04BpREREfn4+AiJ+v1+hUKxevZqmpDAYoSDVTKvV6nQ6RJNphO6OUP10p9MZHR09cODAjIyMb7/99vjx419//fWUKVOefvpptVrtdDphssCUUSqVRqMRVr4oihVg7hdLloeNpVQqTSYTbgK0t3d6evq///1vtVqdnJwsCMLVq1e3bt26d+/esWPHLlq0qLzHyWDcJUyDlFHRIMcitK47Jibm1q1bUEKvVasW0tiVSiWeR1GzjfZkZrM5Ozsb/TTgQwrNfpX/QEvBu4TjOCRT+3y+s2fPZmRkREVFuVyuklLBLBbLzp07L126tHPnzsjIyH79+r3wwguDBw/G8n6/X6/XazSaYDCoVCqRmCUXtTQWRdFgMDidTovFUq4bxah20C6WhBBBEAwGw61bt/bs2VOSCwf+1EAgoNVq//rXv44YMeL5558/cuTImDFjTpw4AfMFPtqIiAi73Y5YJCEE3dA5jsvLy4NGbrlyx+uX47inn3560aJFXbp0gdtYp9OhHFgURUTYo6Oje/To8fbbb8+cOXPVqlWpqanlPVQG425gfixGJYDwmSRJLpfL6XSKooh7OtrMGQwG5FUUFBSguEmhUNDJIy8vr1GjRni6hdAUPg+GdIUqV9xut1qtPnPmzIQJE2ALiqIolMD169dv3boFYcYXX3yxa9euo0aNUqlUSBmBaFNOTg6ysEOdfC6XCy+YgcW4HZztiOKho/OHH37ocDhKOg85jkMdbiAQ2LNn
T926dX/55ZeJEyfu3btXp9NROStEBiMiIuAhdjqdCoUCCwiCEB8fX97bVZJP2uFwaLVarVZL4/KiKJrNZkTVcfmgjQkhBD2kGYyqAPNjMSqU0Oo5pLLic4fDAYlRn88nCAKUC5DJC8OLECIIQmFhYUZGRt++fcl/P/JiyqkA1Ryfz6fT6a5evTps2LDt27fXqlXLbrfHxMSUtLwoii+//HKXLl1q1aqVlpY2fPhwOOT0ej3P83q93uVyYeq6efMm/AQKhcLtdp85c6Znz55YSUFBAXYFgwEQp8ZrQRDWrl2rVqv79OkTFxd3x+XxoAIX8i+//PLEE0906NBh06ZNUVFROA8JIfBy+f1+dNdRKBS01Xp6evrp06efeOKJCtvAYlit1tzcXI7jZFkWRRF5iqIoiqKYlZWl1+udTuepU6dmzZr1wgsvzJ07t7LGyWAUg9lYjAqFmkF49ESnDmRi4e6pVCpDy7bVanVoKX7NmjVjY2NhqCEDF1VFiLVVwPi1Wq3D4UhNTV2wYEFiYqJCoYiJiSlFcRHCXQsXLhwxYkRycjICoIjy6PV6WZYHDhyYmpq6ePFig8FA66FUKlUgEDh16hTkspiBxSgGx3F49vD5fNu3b9+8efPKlSuRsX7H5RUKhcvlMpvNCoWiffv206ZNCwQCkZGReJiBURUIBNq2bTtq1Khly5Zt2rQJBg0620RGRrZr1+6zzz6r4M2kNGjQYMmSJbNnz4ZTDdc7sgUuXrwI05AQsnLlyldeeaXM3UsZjLDDbCxGRQPjiT6FI3UXObYKhQKpIX6/n7bqQzIWGt4Fg0Gn04k0FBolDH2mDxc0tQsOAFLUqpnn+Y8//njHjh316tU7fPhwXl6ey+WqXbt2SfINSILR6XQ6nW7w4MGjRo1KTk6uW7cuIUQUxfT09DVr1qxcufLJJ58cOXJkRkbG2rVrDx06NHr06F9//XXGjBkrVqygLXsZDyAQMcHZyHGcIAgIo0Nbwe1279+/f/jw4U888cSnn35aSocZg8GQn58fGRmJvzk5OVu2bDl9+vTEiRMbNGgAEQSdTjdr1qw2bdr07NnTaDS2bds2NTU1KSnpnXfekSRpxYoV27Zt6969e4VufxG4S9CWlKGdxZOSktauXdu9e/fjx4+fPn365ZdfDk1Wg4QYqzRkVBbMxmJUNCjGpm9DxUiRzI5oGs0RQdK3JEn5+fkxMTGIslXMOGHSoeI9Kytr7969Q4cO/eqrr06dOtWoUaOMjAy9Xq/X67Ozs7HM7eTn59epU8ftdvfv3z8hIWHJkiXDhg2bM2dOq1atFArFnDlzjhw50rp1a+SaDBgwwGQy7dmzR5Kk5OTkL774YufOnV26dGFm1gOLSqVC1rlGo0FqFD5XKpUnTpyA/Me3337bunXrS5cu1a5duyR/KgpKrFarzWaLjo6WJGnAgAHNmzcfMGDA66+/npKS4vF4Ll68ePHixZ9++sntdvfq1atbt27p6envv//+Sy+9JIpiz549U1JSKsvGcjgcVNwENxD6FITbwpdfftm9e/fp06fHxcW99dZbkA1DbQ2+hUoaJjXHqGCYjcWoHFCspFarccdEpA9PnHgAxdszZ874fL5GjRpFR0cj7akiRbqVSmVUVJTT6dy0aVNaWtqePXsWLlxYp06d8+fP//777z6fD7FCm81WUqQyPj5+8+bNCQkJXq/32LFjrVu3zsnJadu27dq1a/v3779o0aLPPvtMq9W6XK6uXbsOHDjwtddeg4lpNBpbtmwJKSO6QxgPGpAmUSqVNPsQ8iULFixYvnx5586de/fuXVBQsGvXrmbNmh04cKCkUwVNANVqtcPhwNnldDpPnjw5duzYwYMH913wilcAACAASURBVOrVa+3atVlZWa+88gohxGQyFRYWPvXUU9OmTRs1ahS8sGaz+fDhwxW35f+N1WqFAgV6VKvVao/HI4qiyWSSZdntdnfr1m3hwoXjxo2bMmVKq1atkpKSdDodJMRUKhXP88zAYlQKrK6QUaEUE7/BJ3a7/caNG1C+sdlsNFyYmZn55ptvwsBCjoXD4YiMjKwApw7HcS6Xy+Px2O327Oxsk8k0atSof//735icCCFqtTo2NhZOOOSI3JHr16/HxMRIksRxXL169WRZbtas2bZt2/x+/7Vr1z7++ON169YRQoxG49KlS0eOHLl69eorV64gJjJt2rQ2bdpUTC4/o8pCn0CAx+MpKCgYOHDgvHnz+vfvj/OKEHLx4kXogNwRk8kE2V6kivM8bzKZeJ6PjY3dtGnTsGHD1q9f/9RTTy1btgynXzAY/OSTT+bOnbty5UpBEGRZ3rlz58CBAytrJ/A8L8uyWq2mRcdOp1On07nd7ujoaKhnQZBCkqTJkyfn5OQQQlBHCXktnU7ncrkqa/yMB5fQinckHcsMRrmB+3UoeXl5x48fR+pV6IMmElCioqKOHz+OsxRfnzJlyunTp8M1HvQ6xGtk3JMiRa7bF/b5fPRb0FYAoa+LQbe3oKAA8wQFSvG9e/dOSkoaMGDA2LFjA4HATz/9pFKp3n///QYNGnz77bd2uz10qHLIFUpCCt3p+OlilUhVG2dVG09J3HGcCoUCwh88z9PzJz8/Py8vL3TAfr//bn6CbinO5EAgYLPZ8C+8WLBgwSOPPPLhhx8SQn7++efc3Nxx48aNGTNm+PDhSUlJGRkZJY2zAvYnZiustqCgYNiwYbhR4Nls3bp1sixnZWURQlQqFcdxf//730PvNmxqY5Qrd7wuZFnmgsEgxIpwhSADpvxNO8YDjSzLtBmzz+crLCy0WCy4V1KlBkIIzka5KPECj6STJ09+44036tSpE5aRwMNE16/T6dC/GRnuHMdBIJvmkPn9fkmSqCPNbrdbrdbShU9Dax4xTSKKIRcVVF64cEGSpGbNmmEZj8dz5cqVWrVqUR1I7Adk39MrtNjEVmyxsOycslHVxlnVxnNP45RlWaVSIT6OGFnoV/CEQIoSGQOBQCnFH4FAQJZlrIGmhBNCUG9ICMnOzq5Zs2ZOTk5BQUH9+vWpqvuFCxdUKlVERATqWytrf2Jv0AsQ3m7sHDyb2Ww22tA6VJACj0wVVnrMeDC543UhSRIzpxiVAJRFCSFIkkhISJAkCQ+dtAM0IQTNPTweD4SjBEEwmUxKpdJms4XLxioJiAlpNBrclz0ej0KhwNhgYHk8Hq1WCzPI4/GU0ioOS2IawBqQcQWZH0JIrVq1EOzw+/2QY23RokV+fj4pmv8qoE8co8oCK4EU6YDgPk6K2gDg/JRlmcpZlQT05/BaqVSiOg9KnniMiY2NJYTEx8fHx8e7XC6YKVeuXEGX6MoVRKD1khBuICGbQ41FOPzgxNLpdPDt4bqj+5DBqGCYjcWoUBCPo7dFdIaGSA8qpwghaI6BukKLxWKxWHw+H5IqAoGAz+ejT6vlB3UJeL1etVoNEwofCoJAP0HmSikGFjaBFDmJZVmmbZ55nrdYLLRviVqtxm4pLCy0Wq0xMTFerxch1LArUzCqF3g4RqSMXj74i/MHPh5YWndcA/y1pEj
4AIshEAmLH1p0Ho8HIlvwbHm9XiR7IXNcLv9eVSVBPc0qlQpOKb/fT4susU+oQAMtwyzmuGJRGkbFw3LeGRUKvb/Tt2hqS0L0riDcQAjBbZ0QotVqcZOFaBbEtCoAQRA0Gg1tj4hxIn+LKqCq1Wq32y1Jks/no1/EQz+dGgkh8BvTNoWkqEkODQhSoqKiMDcYDIb/OaWFziKIHFF/Ay3ARCkWXhBCbDYbISR0tPicfkI7cN/H/jPEj+gGogk3KeqpTAgRBAH/QokDKXKl0Lcej0cuShLCJygCJf9d2PHHx0lCmixpNBo6ZozQbDbToSoUCgjLYQChGwK/LMJqOAG0Wi1E3QwGA/zKsiwbjcbQM4paLVqtNrS2seLB8VKpVDg50VwLTRh1Ol2xECq9e9ABwzJjBhaj4mHnHKNKU0wIFLrwFXCvx88hLBiag4UMKkKI2+1Gibvf7798+fKlS5dq1KihUqkeeeQRo9GIuzmSReD6QgYJnq3DWENOZ1ykkdGAEZXUonHMwsLCffv2BQIBURSTkpJq1qzpcDgsFgvN8qHJNLTCv3Lzk8oV+HVwXoVOvbBFCgsLEcyVJMlisVy5cmXv3r0cx0VERPTo0aOwsNBsNlPnJYwSQRCsVmtOTk58fDzNECpXcHzdbrfVaoWLFGK8169f/+WXX3Jzc2NjY5OTk9EXgW6mz+eLjo52Op1ms9lkMkE3C6PFuSpJUhXUCsnPz+/YsSN8V4QQZHMym4lR9blv76GM+wbE42BXoT6DOlrKD/TQpQOgSe4ajcbj8dBPoqKiJk+evHDhwsuXL+/YsWPhwoVpaWlKpTIQCMBJoNFobDab3W6/dOlSfn4+3Hhh9HPAxkIUCVMjPsHwRFGEKZCVldWxY8d9+/YdP3783LlzPXr02Lt3L7L1MRiv1ytJ0rVr186dO2e3200mU2gXo/uS24+CJEl5eXkKhSIqKgreEbfbfevWrcceeyw9PT07O/vHH3/U6XRKpRLnBtZgt9sJIWfOnDl8+LDVaq2Y3Gr422iBCGqaVCrVxo0b+/bte+zYMY/Hc+zYsa5du+bk5IS6oARBcLlcHMehSwEsfrji8BfmWnmP/16x2WydO3cmRQ8/1JHMYFRxmI3FqAbQbAxJkkwm0x11bmhZU7h+ka6N9k8kRVYLvAKIDW3fvl2j0Tz00EPx8fFGo7GgoIAQolKpkEolCML69eu7du367rvvDho06I033rhx40Z4/XAoe9RoNHDMUANUEAQaN7RarTdu3LBarS1atEhISED8yO/3U8vM6/V+8MEHvXv3/vzzzx955JF//OMffr///u77RmtFqX6NJEmxsbG3bt0ihKB3pNVqPXz4cN26dSMiIurVqxcdHd2qVSsk0kFEVxCEiIiIIUOGTJ06NS0trXHjxlOnTq0A/wrCgsjt43leo9HgotDr9TVr1qxVq5bZbNZoNFeuXLl27RohJPSSGTRoUO/evRcvXtytW7elS5eSorAy7dRe3oMvAwUFBbGxsfSSREpW5Q6JwbgbmK+VUQ2gAQKVSlWvXr3c3Nw7RgzDGEaUZRlGEroyy7LsdrstFgtquxCVwwK3bt36+uuvz549K4riM88889xzzx0/fnzFihUKhWLQoEHx8fHLly//v//7v4YNGzqdziVLlhw+fLh3797hdXWgboAQQuM+SGQmhHg8Ho1GYzAYTpw48fXXXxcUFBQWFq5ZsyYxMXH9+vWHDh2KiYkZOnTopUuXsrOzZ8+ejQ386KOPkpOTGzZsGMZBVjVCbSxCCGJPPM/HxcUhEEwIUalUnTt3NpvNS5cuzc3NrVGjxtKlS+12+65du44cOdKwYcOBAwd+//33oiiOHz9ekqRRo0aNHTt2xIgR5V33SlsTBgIBFABGR0c7HI6uXbsWFBTs378fDwanT5+OiYmZOXPmtWvXunbt+uyzz86dOzcxMbF///4+n+/ZZ58dPHhw7969aWmFz+eDsV6ugy8DSIND4QjcclUwoMlg3A6zsRhVGholJEWprM2aNdu0aVOvXr0QN6T/De/EQJ+SaWwCKepr1qxZtWqVz+ebOXNmy5YtnU5nenp6SkoKx3F16tQJBoM2m619+/Zz5sw5f/58x44dN2/eDAn77OxsQghqoMJoYKGpIkrWaU79zZs316xZs3Xr1latWv35z382Go2XLl3iOO6rr76KiIiAQPauXbveeOON9957b8eOHRs3bhw9enRERIRer79586YkSTk5OQ9OEVaxIzJr1qxVq1Z16dJl2rRpkPqsXbv21KlT4+Li7Ha7xWKZOXPm/PnzX3vttY0bNy5duvTNN9+Etoharc7MzERiVnmPGadlIBD4+OOPjx071qZNmzfffLNmzZoXLlzo1KlT+/bt69ev7/f79Xr9mDFjzpw5k5KSMnLkyA4dOvTo0QNVeKIoQpTOaDRSewVpapVYP1gSdru9YcOGOp0uEAhUTLobgxEWHoh7KKP6Ao8RCUl7r1mz5q5du959911SlJ4VunB4f53mg+Onv/vuu08++WTmzJmLFi1q1arVhg0bPv744w4dOgiCcPDgwddff33ixImTJ0+eOHFi06ZN27VrFxcX9/3336empg4aNGjw4MEnTpxo27btY489FsYRhio9IrXZ4/GsWLHi4MGDffr0mTlzJsKCR44ceeihh0RR/O2335YtW9apU6devXrNnz8/OTm5UaNGq1atgkDXoEGDXnjhhYULF3799dfNmzcvXffrvoEaWA6H45NPPrlx48aoUaOWLl369ttvezyea9euoSXl77//vnbt2vbt23/00UcLFy7s1KlT/fr1N23ahI4u77//fosWLdauXZuWllYBNpbP5zMajd27d3/00Uc7dOiwevXqunXrrlu3TqlU1qhR4+rVqzqdbuXKlQUFBVu3bl20aJFGo9m2bdvYsWNbtmy5c+fOKVOm1K1bd9myZatXr4abFmlk8H75fL6q1oN8z54977//PiGEFuqSsPqtGYxygum8M6oHiNn5/X6e52vWrFlQUAD3wx9MHylJ512pVIqiiJgRftrj8XTu3HncuHEJCQmBQGDYsGEOh2PlypV16tSBoNfXX389bty4I0eOnDt3rlevXhaLZc2aNampqd26dTtx4oTZbOY4rkGDBuS/Vbbvaah31AFXq9VoeYvVXrhw4eWXX547d24wGLx27dr06dMbN248YcIEh8NRr1698+fPjxkz5vz584MHD27fvn2DBg0aNGgwbdq0JUuWNGrU6Ny5c4IgPPTQQ1AgK5tUd1XTVS9pPLRVMBbjef769eujRo1asGDBtWvXLl++vHPnTovFMmzYMJQ4HD169F//+tfRo0f1ev348eO7du1qs9n++c9/HjhwgBBy5swZhUIBeXQaqA3LOEvabz/99FNaWtr48eO9Xu/vv/++bNmyrl27du7cGW6q1atXq9Xqd9999+WXX27ZsiViiKtXr969ezfHcb/99hshJDExEeJzkDnA+v/nQamU42u1WjMzM/V6vV6vD20FwWBUEUrSea+K6Y0Mxu1otVqoVWm12r/85S
9nz56FOg4hpLCwEMtQoaCSQP0dbVBI2+YQQtxut0qlQgyCSmnDi4MPL1++HBsbW69ePb1er1AorFZr//7969atC18Xx3HdunXbsWPH5MmTExMTR40aNXLkyO7du7dt2/bjjz9u165d69atz549i2LD8KaS0DaL0Fw4fvz4s88+i+rL3Nzc6Ojo3r17Q847EAjExsYOGTJEoVAsWrTIZrNNmTKlV69eq1evtlgsjz/+eOvWrQcMGICkb7fbHcY5UpZllFsSQlQq1e3Fa6FJ2RWQa48Yq1Kp5HmeECJJkl6vT0tL69y5s16vDwQCFy9eDAaDL774IsrufD5fp06dGjRokJmZiaq9Pn36zJ49e+3atZcuXWrbtm3Hjh3HjRsHdf4yGFj3iiiKq1ateuKJJ9xuN4oEjxw5kpycrFQqzWZzTk7OW2+9NXv27IiIiMWLF1+5cmXgwIHTp0/fuHHjnj17OnXq9PDDD3/yyScOh0MQBGhfhZYolmE8sixHRUX5fD5YbJApwb+g2XZPq8UJAB0vQsiBAwcGDhyo1Wppgj8+Z2nvjKoPs7EYVR2qq4l7q06na9eu3datW/V6PTp+REVFoZS9lAAHjCrq+oJpFVqdhLRfzLgoaEe/Qvx0YWHhsWPHkpOT0fVz+vTpH3744Y4dO06fPo0cEUmStm3b9uyzz1osltmzZ8uyfOPGjTFjxqxbt27Hjh379+9fsGDBBx98sH//fp7nQ/U//yChZlAgEDAajVu2bOnZsyeKChcsWDB06NAjR47Uq1dPFEW73Z6fn79s2TKe55s1a7Zw4UJ0Ak5ISBgyZMjAgQMPHDgwadKkZs2anTlzBjskXJhMJq1WS+daaEISQqhARmhQsgK0D2inP+jcKhQKQRB27tzZpk2bkydPKpXKbdu2PfXUUz/99FNOTk5iYiIchJmZmc2aNWvUqFFaWposy0eOHDEYDEOHDn311Vf/85//vPLKKykpKenp6eU9eEKIWq1etGhRkyZNBEG4efPmkSNH2rVr99tvvzmdTqi3z5o164MPPtDr9U2bNt24caMoihkZGW63+9133x04cODGjRvbtm3bs2dPlUqFwkM0olEoFGUTRlGpVIWFhTRNSq1WU6MZRbi050/p66HivZIkRUZGZmVl+f3+s2fPPvnkkzhDqCgJKYfcAAYj7LCwIKMagLo5KEtxHNexY8cuXbq8/fbb6OWHNFiDwUDLwW7n9sgFbveYYlGsRJWrsE54m9AkMSoqCrNRkyZNvvnmm1atWqWmprZp0+aRRx4ZPny4xWI5cODAk08+2atXL0KI3W43Go0Q4B43btzmzZslSXr44YfffPPNvn37hleknua5i6Lo8/lMJpPD4fj111+bNWv25Zdf/uUvf0lJSTl+/PiYMWO6dOlSWFg4d+7c8+fPN27cmBCi1WqxW86dOyfLctOmTW/cuNG0adO5c+f++c9//uGHH/5nC7y7Bw48agSjN6VSqaSNh1EoBynzCpg74UsjhKjVarhwOI7TarW1a9fOyMh4/fXXv//++xYtWrzxxhsLFy5s1qyZTqdbs2bN+vXr6cIICF66dKl27dqNGzeWJKlBgwadOnX6/vvvX331VbRPLldMJpPX6xUEYcqUKWlpaR07dnzmmWfi4+ObNGly69Ytl8v15z//GVoetNT0/PnzycnJzZo1U6vVHTp0yMjIWLJkydixYwkhUPHw+/1ly8RSq9XUSUkIQTee0NbvlNKTqLCwTqfzeDxKpTIxMRFlsCtWrCAhzXD8fj+cx2UYKoNRkTA/FqOqgxp7KraJEr9FixZ9//339L/wjtAJ+44roXdkFOJhVW63G/0HqQWGroikyEVE4z6wUd5///3GjRvPmzfP4/F07NjRbre//PLLzz333NatW2fNmiXL8s2bN4cPH240Gtu2bbtlyxaVSmWxWLxe761bt1q2bDlkyJAZM2bcUd+rzOCZHolZhJDU1NRz587169fv1VdfHTZsmNlsnj59+qZNmxISEl555RWbzZaQkMDz/N69e5944gmlUtm2bdvMzEyDwWCxWGrVqiUIQt26dT0ez6FDh8KY9WI2m81mMzq9EEJ0Op1erw9dv0qlUqvV0MSnXYDKD5gdMKmVSiVCZhMnTuzXr9/48eM3bNjQvn17q9W6dOnSOXPmdO/ePSkpKSsrKzExUaPRrF271mq1RkZGdu3a9dSpU82aNUPvYY1G079//5UrV5b34Akhdrt93rx58+bNmzp16uLFi5966im1Wr1t27bPP//8iSeeGDFixHfffYeg9g8//KBSqfR6/RtvvHHx4kW/31+zZk2TyaRQKIYPH/7qq69eunRJEARaP0v+O257l8ABDActz/OovUA+Cu09BesqVHmuJCRJggqd3+8/depUq1atatasSULaPeFaZjYWo+rDct4ZVRqqmk2bw8CR4HA4nnzyyW3btmm12qioKFLU3OZ/rpDe66lRBceSXq/PzMykilAKhQI3dJpcolQqjx49arFYGjZs6PV6zWZzZmbmsmXLkM7y2GOPJSUlEUKgQtS+ffu8vLz/+7//i4mJ6dGjR69evex2u0qlysvLW7JkybZt28qwK+6YU4kkJ2g30J1w4MCBBg0axMXFwebLyMjYuHGjRqNp0KBBv379YNxwHJeWlpaYmJidnf33v//92LFjGzZsqFGjhiAIkZGRO3fu5Dhu/PjxZSiSLyk3Hx/SWw3VCHC5XLTfcClWcpm543igBcXzfLGOQxkZGQkJCejPzfP87t279+3bx/N8+/btX3rpJaPRuHPnzs8//zw1NbV+/frr1q07c+ZMVlbWp59+ajAY0E35+++/79ix46hRo8IyTlJyLnlBQUEwGHS5XCikCAQCa9euPXPmTDAY7NixY58+fTQazbp165YuXTp48ODatWuvX7/e5XJt3759zpw5oihCiuLHH3/805/+9PDDDweDQafTqdVq/2cy2R3HaTAYfD5fgwYNvvnmm0cffZR26sTFCyclrVD5n7NMaNHAwIEDv/jii/r16yOAiG4/cIJWWNkEg/E/YTnvjGoJ7ZhBn1mRRKVWq997773169fDwCKEyLJces47ktyxBnprdjqdRqMRRfstW7YkhJjNZkII4oYqlQo1g0ql0ul0duzYsUmTJkgr/v3335s2bapWq81mc0FBwciRI0+dOnXs2DGe559++unY2Njo6OjJkycPHDhw06ZNaWlpmZmZKpVq//79KOkKF3C5YecEg0EYCo8//niNGjXw+datWz/99FOVShUdHb1r164nn3xSo9GsWbNm/PjxzZs3NxqNzZs3j4qKWr16db9+/ebPn+9wOPbs2XPs2DGXyxVeFSKsDTMlV9SGkhCSk5Oj1+sNBoNarcZjXhh/tCQgT0AtaaPRCDumQYMGBoMB58nkyZPnzp1bu3bt2rVrHz16tEePHoQQ5F0lJibKstyrV6+rV69+/vnnH3/88YEDB9xut9Pp3LJlS/369ct7/DzPR0dHw27med7j8UyaNGn//v0mk8lkMu3Zs2f06NE8zx87dqxv376JiYlI0
l+0aNHUqVPfeeed69evKxSK/Pz8n3/+GZpeSqUyMjLSYDCU7RB4vV5Zli9cuPDMM88MHTr02rVrMIPwXzib6SVcehucQCBgMBj8fr/L5frhhx+io6MTExNhosHxSdfM2ukwqj7MxmJUaULzPDDz4Q6rVqt79uw5d+7cY8eOIRJhNptLTyWBPyD0Ri8IgsVimTNnTqdOnWbPni0IgkqlcrlcoR4XKoJKb+g0mKVWq69evSoIAgIZRqOxfv36v/zyCyEkPT29Tp06J0+ebNy48e7du5OSkn7++eenn376+vXrK1asCG8/uKioKKRs48nJ4XAgWx+DzMzMVCqVLVq0kCQJJXJer7d58+aFhYWyLDudTo/HYzAYHnnkkatXr7Zv337p0qVvvfWWSqV6/fXXw5ibj1wfZGUpFIq8vLyVK1eOGzcuPj6+bdu2MOmgIABXTbh+txRgqdN4Jc4HmOm0GWVUVFR2drZGo4ENQQjp1KnT+fPneZ53Op2iKJ4/f/6FF15Ax+hPP/00NTU1OTm5bt265T14OPwiIiLwWqfTmc3mixcvQh/L6XRaLBYsk56eHggEIiIicnJyCCGDBg3avXv3xYsX33zzzVGjRnXt2rVnz546nQ5bhzOnDM4hjuOCwSDaNK1atapVq1YzZszw+/02m40+2IR2wilpPehRTQjRaDQ5OTmDBw/+5JNPkHpFvXp0eMyJxaj6sFgho6qDViGkSCILf+GY3bFjx6pVq/76179issG/SloPSskwkSMTy+12jx49eteuXTQBCInqSNQtKChQq9WwS1wuV0REBASo6K8cPnw4IyPj6NGjdevWfemll8xms9Vq/eKLL/bu3TtkyJALFy5s27Zt48aNEAFHFI/neSQelWE/lBSDM5vN+fn5sBWKXb+yLPM8/+uvv27YsKGgoKBv375PP/201WrNy8t79dVXIyIinnzyyUOHDrlcruXLl2Pmo52CaHA2LOOE23zz5s0bN27cvXt3ZmYm7jyQ9bp582Z8fHzo18vwu/c0Hth8mPWLyZXBwvP7/aIo7t+//8SJE4IgtGrVCgUNBQUF3bt379+/f9OmTRcuXNivX79x48bhFpqfnx8TE1PmW+g9xQrh14FQAg5cYWHhyZMn9+3bFwwGUTOo0+kuX77cqFGjKVOmJCQkbNmy5bnnnktNTcXpbbPZoILmcDisVivcwHejPlXKOJctWzZs2DAsFhkZaTabt2/fnpCQgB4JEBm5m4MrCEIwGJw0aVLXrl1feuml0FOCHqyyicwxGOVESbFCZmMxqitOp9NsNn/55Zcmkyk1NbVYKg9EFjBbwFxwuVyIAxJCCgsLv/rqq5kzZyJahJx6ZKhQSw6xRXQAvBvFc0x7wWBw69atO3bsgOEV3r51yNNHGhb+0g9D/1XKGuh0lZubu3Xr1hMnTjz88MMvv/xyuJQaqKfQ7/erVCq0c6FFfIQQdHHBa3xutVo/++yzWrVq4Ubk9XotFgtqReHWog7FMkyr2FGIDNrtdqvV2rt3b6oicff7jRQlAhJC0tPTN23a5PV6W7ZsmZKScq+7qBT++PGlljHdV0ePHt25c+etW7eQKRh6mZRZJ/32cWo0Gp7nZVlOSko6evSo3++3Wq1omt6/f//58+dHR0djflGr1Rgk9ieefEIfXQoLC6Oiov72t78dOXJkxYoV8MmVYZAMRkXCbCzG/YnD4ZgwYcJjjz02cuRIvV6PXB+UCqpUKmgEYF7B8zpu2Z07dz5w4IAoivAQaDQav9+/a9eu5ORkmotts9m0Wm2oL6eUqwNZvXDM0LkkjJsJ/VXawxhiYPSyRV0YTVIpvQI/1FJxOBwajSaMmeah2cqyLOfn59euXRvSGz6fz2AweL1eQggtFIWq5IIFC6KiojiOg9hYIBBAgaHVahVFMTs7OyoqKi4uLj8//17DrKhXQC2q3+83m83JycmkKHR19/sNQTTq4OF5HrHXcN0tw3h8Qaj95HQ6VSoVPS60kLYMjf9KGie1sfbs2fPss8+azWYUJ+JxpVatWidPngwGgziIMTExt28F3HU3b96sVavW1q1b33//fXwF9SX3NEgGo+JhNhbjvgKGgsvlCgQCkZGRw4cPb9Wq1TvvvIM5OLR4jRBSUFBgebK42gAAIABJREFUMBj0ej19Jr527Vrv3r1PnjxJigysF154YePGjWjXA9eL0+kkhNBAISGEKsvfEdgNsA/widPpDJd/CP62YDCIOKksy7GxsRihxWLJy8tDgM/n80FAtaSUJtr8kSYOy7Ls9XrDZWbhLoNsNo1GU1hY2LRpU6ifm83mr7766ty5czt27MjIyIDtK4oiz/OXLl1CcRygViC9HQUCAbfbjaNQBnD7Q/ivVq1a8K+UYb+RkEJX8t8Ny/8g4Tq+uK3TERa7n8MvG6pUcq+pb6WMU6VSeTwejUYzduzYRYsWUTNLoVDs378fbTqLdRm6evUqctdu3ryJkDoh5Mcff/z0009XrVpVv3597N4yh60ZjAqD2ViM+w34pUhREOeVV15JSkoaMmSI0WhElo/T6SwmBZmXlxcbG0sIOXXq1OLFi1euXGmz2Uwmk9vtxjQP+QDaX6XYDHqXsSpZlhHnKteuaqFju3vvDo0V+nw+URR1Ol15X+8YJ7xlN2/ehDGXnZ29c+fOrVu3/vTTT6Iobt26tWPHjohAURFLWKuyLBcWFlqt1j8yTioV4ff7jUZjaFPhe1oJ9h7GplAoyjUfqGzHlxR1mIGZJYpiaBZX6IBp4l24xokUN4/HI0lSy5YtIR9vtVobNWr0zTffNG/enBTlVgqCgJ7WOKa5ubkogxVFcfbs2Tt27Jg3b16jRo0wcuhrhMuWZTDKiZJsrP8fWcdCSGGRGYwqD+q8ZFlGl2jEKSZOnPjKK6+cOHGCLiaKIgroKMFg8IcffrBarevWrTt//vyzzz5LCJk6dSoWDlWIkGWZftfpdKLFXin4fD46qrBjt9uhg+XxeOx2uyzLhJD4+HikimMBj8eDORULVApOpxPa4ihvxDippSvLss/nQ7K5XHS3ycnJweHDJ8FgEG8DgUDo/uR53uVySfeIIAjF7mkQCy3DfgtdFd2EcBH244sbO5CKGnRSQo9CWMZJn9JxyJYsWUIIeeihh/bs2bNjx45nnnnmH//4B64mekzxFIRjJMtyZmbmq6++OnLkyJycHPpzubm5ZRgkg1Hx4IqjtwiYWbIsMz8Wo1oC3xU0nKi7KBAIQIyqcePG77zzTnx8PAoJcR/X6XRbtmyZN29eixYthg8f3rJlS47jTpw48dVXX61atQphRI1GgwweTEvBYBA5Kx6PB0khJcVW0I0HDy70kgsGg+FqDxwa3MHr2+vOQsuvSol5US0MSZIQ91EoFOGKxdxe/3W7P4begOgyWVlZcXFx8FrRDHfclzDIP+7JgHcHQS5kaN3rfgsdOVZVhnymkgjX8aVJTkgQDHVcwXOpUqlCj3UZYoV3HKdOp4NlTIo058aPH9+lS5fnn39er9fn5+fPmjXrxx9/nDJlypNPPhkbG0sPNCpRpk6d
+tFHH61YsWLw4MFYPx5pMH65rLn5DEaFwWKFjPsQlPtRo0GWZY/HYzKZ1qxZs3jx4ri4uNatW1ssloSEhBMnTnz66aefffZZ//79a9SogcwelK8Hg0FBEFA2yHFcQkLCzZs3rVZrRkYGHDB3Uy4eWrRYHlOC3+9HhSNiQAUFBS1btoTiUXx8/NmzZzHUYDCIev6SwpShF3jotR8uoC+l1Wqx2hs3biQlJV25cgVJb8iSxvSv1+tD9zz9OuyAiIgIv98fGRlps9kIITNmzJg8ebLdbr/X9GdsIyocsU9oOPhe9xtcm6GGSxjvluE6vndMXaI5ZMXig1QV5Y+PE5J1N2/exIErZi6j3vbatWv/+c9/Xnvttddffz0+Pv7hhx++dOnSjz/+KAjCgAEDhg8fjm6bBoMB9aQ0ylnMWGQwqiDMxmLcV+DGjegenUFx9sISCgaDFy9ezM/Pp70C27ZtSyfagoICpVIJSwseLKwqIiKC9mxG8Tn8vZiVYSjccTx3/Bd3F63Z7hKsn4ogoECP5oPTej0sUMo4SZExCr9OqJJCWMBsqtPpIOgKi4pWEdLkehw+bBF8jaHqGHABfvvtt0OHDtVoNKdPn27SpAnaBZZ5YKH3PlK0o+5pv0GkALnz+KT05e+JcB1fmk2IAStC2pxj8PRw04MSlnHSUlC5SCOUOg7R/Qa/CJW448eP22w2lCA0b978oYcewlBxeSIPEsIrHo8njMWbDEb5wWwsBuMOUJEePJcrlUq9Xk/NrHIFhghVVYViU7FlQu0hQghcMjBBSFhtuPKGSo7dve9k+fLlo0ePFkXxypUrderUCVUioJqWhBB04Lkbbxx+muM4DOYPbE2FAv8ZzJRQ6woN+2DTQGLqjuYyrSQt10GW4fgyGPcTJdlYzJxiPNCgNzAyriCvFQwGIyMjUY5eflCPAowDTPlms9ntdoe6AUKlOzE2VCzS+fV/KmdWEahNA0cXAoKlP87RPKRiwN2F4B32FRWbvSPQklCr1XRfQQGhzNtSiWC/wTuF86dRo0YXL14k/117yHGcwWAQRdHv90uShKeIch1YGY4vg/EgwK4BxgMNnZmMRiPP8xD1Kb23dFigcx5ialqt1uv1Qk8ISTPoTIypq1hsCMYHnFjlPXeGkTp16tD+LQgUlt77iEK3lxTFvEoXKgsF2qFIUUIDmTp16ly7dq0a7bdQEGqA40qn04miePHiReoKhXSW1+sNBALwxSL0Fq6AZumU+fgyGPcxzMZiPNDIskxb5ej1+tzcXCpjXWFA6Cu0G2MgEHA6nREREQqF4vTp0+vWrfviiy9Gjx49YcIEdHdGkLEaOWNkWc7Ly4uMjER1GxpBlv4VGv6jFXaEEJfLRbWyEF2FoH9JK8Gv0B+NjIw8evRobGxsNapTKyws1Ol0BoPBZrMtWLDgl19+MRgMkyZNqlevHtXtJIR4PB4kt1VKqK4Mx5fBeBBgNhaDQdBDTalUogRdkqRwaS6UBMdxHo+H5/mYmBiqpKrVarVarSzLiLPMmTNn1apVDRo0GD16dHZ2ttFopHn9GB46NpbrOMMFx3GRkZGoxqezb+l+DmpahdpY2Fd3D34CILoaGRlZjfLYnE5nVFQUIcRutxNCPvjgA0mSMjMzV65c+eWXX77//vvPPfdc8+bNA4GAxWLBXvJ6vaidJIS43W6INZT39pbh+DIYDwLMxmI86MiyrNFovF4vIkoV4wZA4jZMJXjRzGazJEkOh2PdunXr168nhLz44otbtmyBMD0FHdx0Op1CoahevXKRUhZaz1+6Dn5JZgEa3oXqaIT2LyoGPsfPwbS6XbWrKmO1WjF42kcI7f8+/vjjsWPHXr9+fd26dY8++uiSJUuaNm3aqVMnWZZD+0WilVOosFb5ca/Hl8F4EGA2FuOBBhVb1DOEhmsVIMYD84jneZVKZTQab9y4cfXq1X/9619nz5598cUX586dW69ePSxA9SaQ865UKjFUyMoj/aXqQ6v3qbgrjJ7Sv4JvoY4S5XXXr18/dOhQ37596TIKhaIUA4JO+XhBBQsqOBz8B4EcA9pC02cAqMA3atTo9ddfP3369LRp065fvz5p0qQ2bdo0adIkdPfeTQ/pP0gZji+D8SDAbCzGAw2K3iHbw/M8bWZX3rXuKAmUZfnXX3/9+eefP/roow8//HD06NHt27dHdhFil6FqUtBYofMWjX+V6zjDBaqaqT0EK7ZYh+BQYPvSYwEHiSRJW7duPXXqVJ8+fbAGHLJS8rHwE2q1mu43Kv0f9m0sD1BDSu0qOuzQMFxCQkJ0dHS3bt1ycnK2bdv2+eefezyel156qU+fPiaTCQZWeW/vvR5fBuMBgeljMRj3AC1wozV9tIdxaBsZlUpF6/4w8fj9fqVSiSx1SZLOnz+/efPmnTt3Wq3W/v37v/jiiyxzJZSSXCBqtXr69OlvvfUWCZEpp64+BiFEluWrV6+uWLFiw4YNLVu2HDt2bOvWrY1GI0zVYrd31Gz6fD69Xk/tNlEUsSTO4VDNC3aWMhh3hGmQMhhhAA6VO1oAXq9Xo9Hg8vF6vWq1msYcqRFw7dq1jRs37t27V6VSde/evV+/fkhad7vdyJ5h4ZXSCZWDh+B49Yr6VQA0Xc/hcFy9enXNmjWzZ8/+4IMPunfv3rFjRywQ2qES3ibkt6EVT6jNCukHCLdW2iYxGFUeZmMxGGED8o80MkL7h6CBMRVkkiTJ6/WaTKZr164dPXp08eLFfr+/T58+ffr0SUxMhIAQSsCqkQpDxVBKb2bM+tREKI+ui9Wa2+/hOTk5p0+fXr58+eXLl/v16zdkyJCYmBiv10u7GhiNRmpXQUkExYmhuhh3dIMxGAzAbCwGIwxQ7cpiVhFiLtSngv7HPM9fvnx5+fLl33zzzUcffZSSkvLQQw/d0e9CjTZmbJWOy+XS6XToR0luOwqM0KQop9OpVCpDW27n5eX9+OOPmzZtCgaDffv2HTBggN/vR0ErIcTj8ZjN5jsGBCumMpHBqL4wG4vBCAPFWuOJoiiKYjAYNBqNmISgwXjjxo3ly5fPnTt3+PDhqamp0dHRCQkJ8BAQQvLy8mJiYgKBAC7LCihjvP9AAAu7DiULlT2iqgLKMIslqFEjCVFsl8u1fPny2bNnP/PMMyNGjHj44YetVivHcTCw7Ha7RqOhge9AIIBuRcyiZTBKgtlYDEbYQNzkdtWA69evr127dvfu3TVr1uzdu3e7du1q1qxJCMnJyYmPj/f7/ZCVJ0WlgviWKIq47pixRSkpVojmQqHaS9Wrb2N54/P5NBoNVaxAMFqr1SoUCnSIKibisG/fvhMnTkyaNGn69OkNGjRISUkppcE2srUqYCsYjGoHs7EYjDCAawSypfiE53m3271t27YNGza43e7+/fs/99xzkZGRSqVSpVIVm5aoQQBnAxJc2BV3ryAyi4BsaCyMQSkpugdRDGrCqtVqn8/n9/sPHjz4yy+/zJo1689//nNKSkrTpk3xdWiIMOufwSgdZmMxGGHGbrcfPnx48+bNhw8fHjp06BNPPNGuXTsSUuK
Oqwl/RVFEutXtl1hJXjHG7dxuOsiy7Ha7q0tPofKGnl1+vx+qYLSHNA38UXXQQCAQCATwDICJIDc399SpU//85z+zs7MHDhw4YMCAuLg4PAwwM4vBKAVmYzEeaGjZFBWhJkUJwkqlEt4mQRCwjEKhQCl7aBE7loH9tHnz5nPnzr333nuffvpp9+7dW7duzWIojPuJ3Nzcf//731u2bNFqtX369Hn++ectFktoFwSqvB8MBjFlOJ1Og8FApw8qIQETjT5CsApQxn0Js7EYDEIICQaDKADEBYDsYEEQ1Go1DC+3220wGJC/otFoFAqFzWZTKpUWi+Xs2bN79uzZsGFDVFTUhx9+2KRJE2qTVfZmMRhhxufzSZJ08+bN1atXb9mypUWLFmPGjGnXrp1KpQrNJsSUgctKpVJ5PB5cWXesQqiOjYwYjLuB2ViMBx06Ddzxv7m5uTVq1MBr2hlQoVBkZWVZrdaTJ09++eWXOp1uxIgRTZs2bdKkCRLVg8GgyWRiucCM+4xibXACgcD+/ft37949Y8aMN998c9iwYbVq1TIYDHjGoP257XY77V0NisV20TiyJBVfBqP6wmwsxgMNzu1QASrUTwUCAZvNFhsbSwjx+XxqtRrRDaVSmZOTk56ePn/+/PT09F69ek2cOFGlUqGXc2jkEWXtlbhpDEY5Icsy2paHRswPHjy4bNmy9PT05557bsyYMfHx8S6XSxTFqKgoUtS+GkWydrvdZDIh/SvUgxXqBmMw7g+YjcVgEFmWSZGbiuM4dGQjReV+hBClUnnlypWCgoLVq1fPnDlz+vTpffr0iYiIoC4uPKljYY/Hg567VJGBwbhvoN4pQggNl5OinlE+n2/9+vVpaWkxMTFjxozp1KmT1WqFQxeTTWguI2YWOIaLrZnBuD9gNhaDQSRJwr2ePkbzPK/T6QKBgFqtzsjImDt37rJly4YOHTpw4MA2bdpoNJpgMIj6dq/XGxkZiW8JgqBQKJj7inG/4nK5UKqJljt4hAhtp0PDgqdOndq8efPSpUu7/z/23jzOxvL/H7/us2+zm7HvIlkiYkgJRZRSJEqRsr2VJaEUX2NJ6P0eRFLvENnJNkiWNCW7t9TYlzAzZjszc/blXq7fH8/PuX53M3MmxpkZw/38w+O45z73ue5reV2v67U8X926Pf/8802aNKlRowYhJDc3F5YtQF7oU6nhreDeg6JjKbivwdKgoE7hYm5ublRUVGZm5saNG7/99tvmzZsPGzYsNja2bt26jImR5cCDpb1A6JXT6cR6UeKxFNx7YNFUgiD4/X5CCGN7ZzSnJFDR6NChQ0ePHh0zZszs2bPbtGnTvHnz6Ohom80WHh7OrMWKr1DBvQpFx1JwX4NNbPbhypUrKSkpq1atysjIeOedd+Lj42NjY6FOkcAWgnK5LpfLaDSyOC1RFN1ud1hYmHIcV3APQ54jwo4cWAKwB2u1WnA3sEOL3W4PDw/fvXv34cOHp0+fnpiY+Mwzz1StWhU5hnJqLgUK7jEoOpaCewqIqQK3p0ajUalUILBWq9U8zyPWit2MiiI6nc5ms1kslgMHDqxfv/706dOvvvpqsD0AewnYsArkWClQcM8D89/j8aAODw4b+BPciNhR2PUCpilJktLT069fv56QkKBWq1966aXHH3+8YcOGHMdZrdaYmBhsNMyyheewOC34GdkZRtmVFNz9UHQsBfcskNyHwPMCZ2W3263X6+ElPHz48MGDBydMmDB79uxGjRo9//zzoig6nc5gvgwUGYTc53ne7XajorMCBfcJUGw7JyenUqVKuCL3mENbwm16vR7qkZysAZ707OzskydPfvnll2q1etSoUW3atEEgF9PP5IwPcmWO/ZxiMFZw90PRsRTcU2BiF9oVE8Gsjg3bAI4fP/7TTz8tXry4W7duPXr06NixYwH+BSBYTC4L5PL7/SqVSsmHUnA/gAVa2e12SilOF/Jy0XJrFpGZmgqz0GVkZERHR+t0ukuXLn3//fe7du2yWCyTJk1q165dfn4+I9nKysqKjY3F8202m8FgUHJ1FVQgKDqWgnsNPp9PEAScevPy8iilYWFhKIPD87zH4zlw4MDSpUtNJtO//vWvLl26cBzn8XiMRiP+lU/1InPLN23atGrVqu3btz/00ENz5syJiYmpXr161apVy/OdFSgoE+CYwU4sN2/erFq1qiiKDodDkqTo6Gjm10tNTa1Ro0aBYwnbYFihQ0mS/H6/wWDIysqy2WzLly//9NNPExMTW7Ro8cQTT7DflTsN8dnn8ylnGwV3P4LpWEpNAwUVFZD+Xq+XEBIVFRUdHQ0t6ujRo8OGDevRo0daWtq6deu+//779u3bQ9wbjUan00kIQRSXw+EQBAGnc3k0iUqlunbtWp8+fSZMmOB2u3///ffffvvt0Ucfzc3NLa+XVaCgLKHT6eBnJ4Tk5ORUrVp1zpw5Wq22Tp06iYmJGRkZW7ZskSTJ5/PVqFEjPz+f4ziQxhFCcFbBmqKUYoWqVCrk3kZFRdWrV2/mzJnQ2w4fPly5cuU5c+b89ddfPp/PbDYz2xiOTxzHKQqWgooLxWSloEICMRyQwpTSrKysS5cu7dy5c9asWZMmTZo2bVqVKlWMRiM0J6PRyHGcKIper5eVUZMkCc4OGHFZJBbHcSqVKikpyWg01qpVy263R0dHT548GTUNy++NFSgoUyDPIzc3t1KlSnPmzElJSXG73ZIkHT58uHfv3q+99hohBBGQLJpKTuUAMOMWqHpZSWl89+WXX87Ly5swYUJSUtLw4cNjY2OfffbZBx98sEWLFj6fDxmLOp0OhueyfHcFCkIFRcdSUCEBsW6z2ex2+9dff7106dJu3boNHTo0ISFB7uxm0SHIKler1X6/H9oSpHaB8rSMuAFVRBITEz/66COO43ie79SpE1gZFSi4HyAIgsvlio6Ozs7Onjhx4nfffQdDVOfOnb1e75UrV0RRBCUp04FUKhUSfvEEjuN8Ph+lFKoSLjIzFXQ48PrGx8c//fTTWVlZa9asWbx4ca1atUaOHNmoUSOTyeTz+RQFS0HFhRKPpaBC4urVqzt37ly9evVDDz3Uq1evZs2a1apVi/FX4R5RFCHZ/X4/+B2QJ4VoD8bIgK1CpVJB9GMt3Lhxo1atWnjO6NGjhw0b1rhxY2V1KLhPwOKrJEmy2WxgbF+zZs1LL72E66tWrerXr59arc7Ly4uKipKnjBQG+LQIISqVSqPRMHMXvuX1eqFFMfXryJEje/bsmTx5cmJiYnx8fOPGjZWUXgV3OZSYdwV3C+Tp2ewEzNQgQojX69VoNAUoQzGDc3Jy9u7du2bNGpPJ1KlTp27dutWuXZs9WZ43focQBOHMmTMTJ0784YcfCCEWi+Wzzz4bMGCAPLFcgYJ7FTiHsHo477///r///W+TydSkSZOxY8f279+fEALmd0bbGxKwKHs8//Tp0//9739XrFgxb968Rx55pHXr1oQQr9fLcRxuY/H1jL0FF2FRI4TA9R+q5ilQEAyKjqXg7oI8DxyATMQ0pZQiQxDuiZycnF
o8edd95JDJ4MxCyMEzn9CbXMZFnetGnT/fff//PPP0MBAWEOLE9LF1HGRRUHamR/6V5gcsKO0L/izjWZTMbFamCUFY/zbGOFhBBoAcD5JAgCaj8BLTSjSqQcx6mGkjdsFMUNbdu27dWrV9++fTt06OD1em02G1VSqP3HDdShccI7dfDgwYEDBz722GP33nsvHFE0/EdVf3mep3H80tJSn8+Hs1lUVLRz586JEydu375dEAQU9tbIzjIYfx1mYzHqJGpCIJQ+00niaiaExOPxWbNmWSyWhx56iAaGzmr9uq7/9NNP48eP//LLL71er8fjEUXR5XKhvgn1aKf3CUkK1M1GoyFoZbN58+YjR47s3bu3a9euQ4YMKWNPlLmHSSKRnFTWP5R0Khqn2WyWJInjOIT2MNcmKwrGcZzf709LS9MTeu40OKgnakJ5nq9Xr54gCKWlpZWw+aqaGjm/9CJUFAWFmTCGJEnKzc1du3btkSNHrrzyyuHDh+u6fobrEK5BURR5nj916lS/fv2WLVvWqlWryt0yuq7/61//Onr06KJFi0giK8A4N9WsU5nBOB2mQcqok9DphOp643MkEpFl+dtvv927d+8DDzwAA6sSEzbHcZ06dZoxY8agQYMOHTqEzGLoQVQ1qqrKsgzHW2lpKWapd999d968eampqaNHjz5y5EivXr2g11UN46lSkLXt9XoRjeU4rkWLFlarlU8SHMc1bdrU5/NBPxYiWMhkojFEmBFU/5NBCLFYLMXFxdFoFJIQsizDzHrnnXfuvvtuRVH++c9/lpaWtm/fXhAEKNZWtB78m5eXl5OT89lnnzVq1KgSzjYUrxBCpkyZkpKSsmjRIgT3kWRJEtK4WHNdaebIOK9h2g2MWg7UCE//PhAI9O7d+z//+Q8SQU6dOlW59aPCfNu2bZ06dSKEoDseVEx1g8QA0mKShbEaHwITuq5/+umnd911Fz5LklRcXPzSSy/NnTsX35RbG1wJTYSqptxxIk6kG85mfn5+0hUxcCpLSkrgF6QS84QQi8WSkZExePBgQkhWVlZdOW7VOc5oNIprfvfu3V26dMENFYvF4vH4F1988cQTT5xhnJhBjh071rRp0927d0uSBAWHSgDZ99LS0hMnTlgsloKCAuRT4q9oo2kcCYNRG2DaDYy6Cowb2oaFEGI2m3Vdnz17Ns/zjz76KFJACCHBYBAW0l+HpokEg0FMLVi/qqpVHSukySiEEAg4DRw48K233mrUqBFt3VNSUpKRkYE7tq7HCuk4jZX5qqomZaM4XLquQ0uW4zi32x2JRDIyMjp27Dhw4MABAwY0adIE6djRaJTFCo0grgqNMUEQJk6cOGrUqK5duyKfHaG6Sy+9dP369VlZWRXFCt1ud7t27aZPn96tW7dgMJiWllaJ+7EMK1euXLJkybJly7A54z2Iw1Kzp4zBoLBYIaOugkwaQghEd/BUDYfDn3322ahRo/REfRlaI5/tyjHZFxYWpqamXn311WlpaVyiuXKVgjpHPaGfabPZjh071qBBg1atWqH+kRASDoftdvvcuXM3bdpU1eOpanRdv+CCC+C7cjgcCAlJkpSsWCEhJBKJcByHls8TJ0585plnSkpKioqK1qxZM27cuGbNmvGJzt81fCxqGZIk0XaBDodDUZRdu3Z1796d53mkWOFqHDNmzBmuQ7vdHolEHnvsse7du5tMprS0tGg0WgkDq7S0FCcoEAgQQoYOHRqNRvfv3w8DSxAE3JtQhGcGFqP2w1xWjFoNzXkn/52StWjRosceeywzM5MQYrPZTi89+4ug6Qpa8ZBEjzyn01nVnWjD4bDP58O7jslk8vv9S5cuHT16NEkUXkWjUYfDYTabMzIyKtHsrxZy/Pjx+vXrK4qiKEpKSgq8j0n0Y0GKNiUlxWq1zpw5Mx6P22y2QCDgdDqho4ZWlfCJslQeQH2KKLwoLS195513Bg0apOs6GhHCQywIwtVXX71+/fqK1oNsd4RiSeIarkRUxOfzEUJkWfZ6vZFIxGazPffcc2+++eaLL77IcRwt7E26U5nBqCLYewCjVqMnkgUR1MaH4uLiRx55ZMCAAYSQgoICkmjejM9nhdlsRuc7rARfVoOrw+fznThxghDi9/uDweDjjz8+c+bMpk2bxuNxqgKF+alBgwa//PJLVY+nqqH6+NBTIITY7fZAIJAsPxYy2WkIMhaL2Ww2ZNnDwIJ1JQgC9XsxiKHHuc1mC4fD8+fPf/LJJzt06MBxnMfjgQZKLBZzOBxWq5VqXpQLTquu68XFxW63OxgMVi7t5Pjx4zhlZrPZYrFkZ2f/9NNPR48eJYRYLBY8BHieZzktjDoBs7EYtRoaDqAOj1gsdurUqenTp7tcrmg0mpWVpeu6xWJRFCUrK6sSm/D5fDCqEIaonm45siw3btwYVt3AgQPbtGljMpm8Xq8oilggHo8jJ6xZs2bwrtVpEPohhAiCYLfbUeLn9XqTlXDq8/mCwSCq20pKSjweTzgcpvJm6LrD87zdbrdYLGe2Fc4rrFYrTs2ff/75xBNPpKSkjB49ukWLFqqqwgvlcrmcTqckSQ0aNNizZ09F66GaqDDOCCGpqamVqIeNx+MXXHCBoii0i0N6enpOTs6OHTvQ24qi6zo7j4zaD7OxGNUK0o+M/xVF0dgtxBg8kiTJZDKh3MlqtaqqGo1GPR7PBx98MGjQoFAoBMEepPVUTiAUCfUIMlosFp7nA4HA/wxDoKchHTPK2crFuKfEUG2O6f+HH37Izs6eNWvWFVdc4XQ6MzIyaIQFalImkwkBr7Pdr9oGPaQOhwNqrrCeuSRBEgWhhJD09HRCCK0nQD4fjjzkyM+NMBMsDGhSkMQFVtF1iN2PRCL0dQKRU0KI1+vdv3//yJEj+/Xr16FDh927d0Nio0y4vFGjRsuXLz/zYACfELQz9o78i8A7ZTabqeBqJBIZOHDg0qVLYTdTxxvP8xDNIoRQ80uWZWZ4MWoVzMZiVCuYXBVFkSRJVVWLxWK326EWDckrhCdCoRASRGRZRtQsGo1Chz0ajS5evDg7OxsGFknYK8Rg9FQpfr8fNllRURFyt9HmtlwwJFEUMUiasC8IwrJlyyZOnPjHH39kZGT06dOncePGFotF13VYV+ishzr2+vXrV8N+MeoQoijCwqDdCFRVLSoqqug6hAXmdrudTmdBQQGMGNgr69aty8nJWbhwod1u79ix48GDB5EjCCM1GAySamzdA7OPS/QRIoS4XK569ept2LChsLAQ4Uv4enmeFwQBNhnKRKLRqNVqhfeLwaglMBuLUQPAT4NHvCRJp06dIoS43W6r1QolkZSUFJvNFgqF6GsrdAgJIXl5ec2aNcvKyjK+0eJDNfgn4vE4xMQ1TatXrx7sPL/fr1YAfkUTvJA25PF4pk2btmHDhlWrVm3fvv3ee+9dtGjR9ddfT7dis9nwgu5wOOrVq/fZZ59V9X4x6hZ2ux2i+chJR356ZmZmRdchutnAOqHRcEEQFi1aNHv27LVr1/I8P3ny5JMnT/r9flrZKstyNTdGpLcMx3H
wvXEcl5qaessttwiCANea1WrFDYUqSGRhogqSSpgyGLUEZmMxqhWUyHEcRw2jcDicn5/PcRx0Ezwej8vl2rdvnyiKKSkpPM9DctBisZjN5lAoVFxcfPvtt5OEnJvRd1UNtdyapkWj0dLSUmjNwz2QlpZWUS42x3HxeBzZQoQQn8+3Z8+e++67z+12v/HGG0uWLFm3bt2CBQuuu+46KmONaYa+jlsslv3791f1fjHqHLjAcMG43W7IflZ0HWqa5nA47HY79DwJIaWlpU888cTWrVu/++67LVu2PPDAAytWrHA6nVdeeSX8qYSQeDxOleeI4ZqsOjiD/JVRH7tnz56bNm1yOByxWEzTNKfT2bNnT47jXC5Xq1atUlNT8QDp3bv3hg0bymRuMRg1CLOxGNUKfUzLsgxXTXp6etOmTYuKigKBgMViuffee3Vdv+iii/BwRz9mvNHyPJ+SklJYWEj7ptFcnDLJT1UHsoDT09MDgYDb7aZehDNMvPrBAAAgAElEQVT4sXRdVxQF2UKHDh0aOXJk9+7dp02bNmfOnB07dnz44YetWrWKx+O5ubnYKVif+G1xcXFeXt6tt95aDbvGqEMgaQ93EM/z6Mjk9Xorug5hVwUCAbvd7na79+3bd+2111555ZVvvfXW9OnTP/300/nz5zdo0ODYsWOdO3dGY1CqEoeVE0NQvuowvibhFQWvUjzPQx9V0zRkXK1du3bdunWKorRp0+bgwYO6rq9cuXL79u39+/f/9ttvq3qcDMZfhNlYjGrFGM7TEw0HUGCP2sAmTZoIgoD0LCQnURsL/544cSIjI4Mk3uPLrK0axi9JErLECCG7du0ihLjd7or8B8Qwbaxbt653794ff/xx3759Z8yY8fvvv3/44Ydw7AWDwRYtWhCDDcolOsCYTKbGjRtXw64x6hDwekajURhY8DMdO3asouvQZrPpuo4o4bZt23Jyct58882bbrpp0aJFR48efeGFF1q2bMnzfHZ2NhqTE0JMJpPT6SQJbxYSnqp6v0wmE5Lxac47crMuvfTS/fv3o9s3RkUShcCiKGZmZuq63r9//6lTp8qyPGfOnKoeJ4PxF2ESI4yagRoTeKR6PB68atPUJSiLGlVwNE0TBKGoqGjgwIFl1gaHVjV0guI4jtbEEUIuvfTSw4cPN2/evKJUMExstFZxx44daWlpzzzzDMdx8+bNg1RjIBDIysqCXBYhJBaLOZ1Ok8kUi8VSU1NLSkoqofvFOLdBoyc4R1VVzcrKOnny5KhRo/7zn/+UuzzeQBArJISsWbOmVatWa9as2bZt26JFixRFMZvNBQUFhw8fhpnFcRy9lVDhceTIkZSUlKreLziuytxNJpMJ9S7QP4NkV2ZmpiAIHo8nEongGWKxWGKxmNvthjQxg1EbYDYWo7qhD0T8Fy+sNInVarVS9w+faBdIK7pRc4ecDGTFYlV4KJexyaoCNaE7H4vF7Ha73++/8cYbf/7553bt2pW7PFrHEEJkWV6xYsUdd9zRvXv35s2bT5gwgRgUuXbt2tW4cWOk0uM1XZZlp9OpadquXbsuueSSKt0pRp2DKpWUlJSkp6fn5+dPmzbtoYce2rhxY7nLo7QQN93XX3/dr1+/4cOHm83md999F99DXo7juJMnT2IxxLjhBhMEQVGUMWPGVMOumUwmqjeB2x8OtrVr106ePFnTNJfL5XK5YrEY1ByysrJQKLN27dq5c+cOHDjw1VdfrYZxMhh/BWZjMaobGFVIqkC1oCzLUImk/chIIhMLUQNqPEFymsYRyoQLk1gDBccYzUShphXyQmw2G0Y7c+bMxx577KqrrqpoPdTmE0VxxIgRa9as+fPPP1955RV6KOB7y87O3rRpU5s2bXRd/+WXX2w2WywW8/v9ZrN5165ds2fPxsLwN5BECyAEVpK1y4xaiNGpg9NNmzRDK87r9UqSNHHixCuvvHLYsGFnWBW9dPv06bNw4cJZs2YdPHhQkiSLxYK+RgUFBVlZWYsWLbr55ptlWd63b184HJYkKRQKOZ3OQ4cOIS8QtyS6UyuKAkXZ5F6H2GXj3e31erdv3258g0LGWEpKyvr167E8x3GrVq3q06cPsiRJImsNH2j+PoNRnTAbi1GtIJWb53lj4hF0sE73WkGs0u/3Q0OLEALJ6WrQaMBIrFYrtoVUdLzx22w2tNGdNWvWqlWrevXqtWTJEtrFpQw8zxcUFDRp0iQWi4XD4RtvvHHs2LFvv/324MGDfT4fz/Mmkwmdkn/77bc33njjm2++GT169D333PPdd9/NnDlzzJgxx44dCwaDmGup84/67c4NOU1GReAKwV0DrxL8OpIkQb+K5/mXXnrpu+++69Wr1+LFi6nyahnwkgAhOlEU8/Pzo9Ho5MmT77rrrqZNm8I9nJWVdfjw4b59+/773/8OBoOKorzwwgt5eXmjRo267bbbDh8+3LJlS0KIxWKJx+N4z0FAv3quQ7xWwb4MBoOpqamapoVCoV69eq1Zs6Zfv36bN2/+4IMPBg8ejGg7IcRsNuM1CS7wqh4hg3E6LOedUa2gVghzhqZp4XA4FAohEwtCDHh2o8UsHos0D6OkpIQQAh2H6hktx3HwseF92mKxILl4165dd91114EDB+bOnVtSUoJ+PuUSiUSaNWuWm5vL87zH45Ekae7cudFo9Jprrvnqq6+QHFO/fv1HH330xRdfXLduXX5+fr169VavXv3xxx8LgjBjxozly5dPmTIFyvLGgTHr6nwAITNqYePKRw67JEnHjh2bNGnS+vXrX3311QYNGtSrV+8MOu/w5eCy6d2797PPPnvRRRcNHz78iy++gAkSDAYXLlw4cODA1atXb9iwYfDgwXl5eS+//PLRo0c/+OCDtWvXDhs2rLCwEGF6KmRViYY5lQAPBJJ4u6DVxB6Pp7S01G63z5gxgxCyfPnyhQsXOp1OmtZJEwlQn1gNQ2UwjDDTnlEDwEtEtaQJIZg/aNCB6hds2rTJ6XS2bNkyPT0dPVJCoVC1mRf0gY4PwWAwPz9/3rx506ZNe+aZZ0aOHIkgZlFRUZMmTcpdA/JwL7744sLCwvT09KNHj3bs2PHUqVMtWrS4++67R48e/a9//UsQhNWrV7/33nuEkMzMzD59+uTk5MyePVvTtPr166elpXXu3DkYDDocDrPZDJ8BCxSeJ+AU0+A4IUTX9WAwePjw4c2bNz/88MNjx4595ZVXJEmKx+PoN1DueuCpQpI4LsU+ffrE4/Hs7OyJEydu2LBh1qxZqamps2bNgk3GcdywYcPatWv3/PPPN2nSRJbl7OzsqVOnHjx4EJV9GA+sFni2qvQ42O12nudlWY7FYl6vF24qvKShWLJNmzZvvPHGuHHjxo8ff9FFF3Xr1g0/RL4B2lKxWCGj+mE2FqO6QYIq4oD4pri4OBgM2mw2RVFOnDiBcICiKH/88cf8+fNnz56dnp4eiUQwhXi93mrzY5FE/aOu64FA4OjRo2vXrs3NzX3qqaecTufs2bN9Ph/6CVYkf2qz2QoKCjIyMuBLcLvdixcvbtCgQSQSuf
HGG3menzNnzqOPPhqPx9G60W637969u1mzZrt376Y16ocPH662/WXUNhAfxGdFUZCo9/7773Mc989//tNkMr322msejycajcIEL3clqCYJBAKZmZlwHlssFlzbffv2jUQi8+bNe/DBB6+55prffvutTZs2sVhMFMW2bdt+9913jz/+OPo8Ll++fNq0afSOoK2iquGdRxAEPBasVivku/Lz871eL5TqEDq89957V61atW7duieeeOKTTz7JyMhAvoHdbsethPurqofKYPwXRi1dFJJU5G1mMP4+9AKDBqmu6wUFBVu3bk1LSyOE2O32tLQ0mFA2m83pdJrN5p9//llPSLrruv7UU0/t2LEjWeNRVRWxGD1RRUUSjitslG5XVdX8/Hz8V1GUwsJCXdc1TUOyVEVA5aukpAT/pdvCCrG2eDz+3nvvjRo16osvvnj//fe7dOmSn59/ww03PPfcc5988skdd9zx6KOPhkKhMsOm6kEkIcGKY4s7ugbBAOiJJgZHIL6p5nHWtvFURLnjJAlPDBYAuORisRiWpFfgGXqT4wosLi6WJEnXdRTk6obbKhwOy7L8888/d+/efeXKlfCqFhQU/OMf/3jsscfWrl375JNPjh49mm5CFEUMCXNHNRxPQgjd00AgYOw95fP5li9fruv67t27oWeRlpb25ptv0t9KkqQoCt1ZBiPplHv/6rrO4WFtjFiz3EBG1VHuq2QgENB1HWaWcRlBEKgijtPpRHbUhAkTJk2a1KhRo6SMxygWT7UWkfsCrVFyWgsRqGYb9wKWU7nrj8fjqEwMBoNut1uWZbPZDCMS4lgkkYi2efPmP//80+v1duvWrXnz5pFIZM2aNaWlpW3atGnbti123zgGNDzBbawnOp+g2qsaegqdAQyAPklO1+Kv5nHWtvGc1Th1XUchIa0IMf4kFApBs4omnle0C3h5UBRFURTqHyWECIJgs9mMv9q6devevXutVmuvXr3S09NPnDhx6NChI0eOcBx377330qFieJqmWa1WbLpKjyeOAyKSqqoab0maLkmPBiEkHA4jD6G4uNhms1VUCsBgJIty719N05iNxagZqF2C5yPsKthSeJ7SRydsEU3TRFF0Op1PPPHEHXfc0apVq2QNo1wbi04YAFMRPtPli4uLMzIy/mcAgoY+YVpheWqoIQ3ZqEBBwVuR2Ww2hopQGhaPx9E/mzrGSO22FQizsSo1Tl3XzWazLMt4wUDVrfGhTRI3CH0hKReslr5e06uRECIIAux+rAH3Hb37CCFwuCLLHiOkG0J5IzRLq/R4YqMYAHbZ5XLRHYE3Ggmdoih6PB5d13FrY3n6fbLGw2CUoSIbi5lTjJqhzPMXZgoa1NAQCf4E94/VasX7t6IoSdTBqghUTlHngcPh0E+Tn0ZLn/+Z4YF3bjqlYXmq/XOGVxqO44zv65hFcA9jvmQ57+c8sKdx4VFVqjIWObyhZzCwSOIyoxewMaOR/hAfqA4cXYBeq7jv8BnXIXIoqyEfC5lV9KGBXaY7QkdISy/LvLfQ7xmMaoZpNzBqNTAjkOgqSRJeqZlsAYNxXhEOhxs2bEgSlYwIr1eU4M9g1B6YjcWo1dCacMTFeJ5HOKNmR8VgMKqT4uLirl27koSIMWEScYw6ArOxGLUaBMvwYEWowul0FhcX1/CwGAxGNSJJUvPmzUkixwChz5rNn2Mw/grsGmXUdlAzRRI+rczMTGZjMRjnFaiGIYTwPI/uCEbVIQaj1sJsLEatBpndeG1FLV7btm3R0IbBYJwn5ObmZmVlEUJo98bTa1AYjFoIs7EYtRraB5cQYrPZLBZL06ZNN27cWNPjYjAY1cdvv/3WunVrfDbWPzIYtRxmYzFqNVCBIoTQym1FUT788MOaHheDwag+/vjjD7RNVBQFzwGaQsBg1GaYjcWo1SD3AjKJgiAQQho3bnz99dcfPXo0FotB2YF+wMJnBbQc8bCmCuxlhN0Zfx9EdpBRR+XL8afS0lK6mCzLZdRfGXUC4ymDeJskSZVblaIo+CAIAu7owsLC3NzcRo0aWa1WGh+kkqoMRm2G2ViMWo1RORCmD8dxPXr0+Oabb6AITwhxOp3xeLxMh42/CG01RQgJhUL0y+SMnmEA0qmEEJ7n4/G4JEmBQIAQ4nA4NE3DGbRYLKwmvy5is9lw1gRBEARBkiSHwxGNRiuxKovFgqIWi8UCwd6dO3f26NEDfzVq9LOWJIzaD7OxGLUaY1sPmoHRt2/fr776KhgM8jxfVFSEPwmCUIlabkmSIKeu67rL5YJ1hTaFjOSiKAo9sBaLxWazeb1eQgjt30IMM2gNjpNRCdBqGu85aHpDEm0bzgq87WRkZMB1rSiKpmnvvvvu0KFDCSFUUx5ddJK9EwxG8mE2FqO2gx4y9LOiKC1atFAUpbCwsLi4ODMzE+4Qt9tNBUv/OrQTTiAQ4HkeJtf/bI/DOFvQzITneUSRRFFElWgkEiGEoAkjDdqyPJs6B9qoU2GF1NRU6hU+K3ie9/v9JGFnWyyWAwcOlJaWdujQARLEzMfJqFswG4tR26HNcWmvaI7j/vGPf8ybNw8dA81mM7wglejfR8MZaWlpeHXWdf3Mrd8YlUBRFDgeOI6z2Wx2u53neYvFQhvPobk1U5Wso/A8ryjKpk2bXC5XNBqVJCklJaVy/kiv14vu70jU++yzz+688074tEwmE03XQ+vGJO8Gg5Fs2BONUauhKjg0TQdOpn79+u3cufPYsWOCILjdbk3TKpePRcMZubm5t912m9lsTk9Px5s0I+moqoq+k5IkIQeLEHLq1ClCiCzL1EtRCX8ko2YxmUw8z/fs2XPIkCHhcLjSvaKNtRE+n2/37t3bt28fNGgQMUSQaYdsdp0waj/MxmLUaowJOvS1VdO0YDA4e/bsCRMmmM3meDweiUQQhzrb9SO0MWPGjCuvvHLFihWqqpaUlGRmZiZ1JxiEEGIymaDEAZ0zTdNEUXz77bfnz5+P1DqSOMUsHlTnkGVZVVW327169ers7OynnnoKOgtnux6YTW63G/fy0qVLr732WviVjRnuyPdiXk9G7Yddo4w6AB7WsKUIIZqm+Xy+zp07N23adOHChWazOSUlRdO0SszNu3btatKkyaxZs6AdryiKz+dDHj0juXAcJ0lSbm7u8uXLx44de/nll3s8nrvuuutf//rXiRMnsAxMXjSmZNQtbDZb+/btCSEul2vevHmtW7f+z3/+c7YrQeZlNBrlOG7NmjUHDhx44IEHcF/DxsJn2FisrpBR++FUVYVEECFE1/V4PM4uXEadIBaLEUK6dOny0UcfXXLJJaSC9hqaplHxUpPJhKtd1/WTJ09Omzbt7bffTk1NhR/FbDZLkuR0OmOxGAqXeJ6nP6/x3h0YCb1DafCUZqjgT3TAtQFJkux2Ow64zWbzeDylpaWapiFrB0WjHo9n48aN7dq1kyTJZrPR8eODpmmyLFMp2rN1VXIcF4lE3G53LBaD2IfNZoOzpLYdt7pyfssdp81mE0Vx7969bdq0QfWow+EQBGHw4MELFy5MS0uD8
YSzIAgCXFNYlaIo8GtivwKBgNfrPXnyZKNGjY4cOdKoUSOW6s6o/ZR7X2iaVluexQzG2eJ0Op1O58KFCwcPHrxr1y5qA0F7SZZlSZJQ+00IkWUZdU8cx4mi+OKLL3bv3v3tt98mhASDQbvdrqrqiBEjXC4XTDdG5cD8ilyreDwOByRMBFVVQ6EQTkc0GnW5XKqqpqWlhcPhHj16cByXmpqK8jRITeJkpaSk+Hw+fMaXZ4XVam3RogXHcfXq1eM4zuv1KorCtCGSDhyQF1988aOPPkoI4ThOEASbzfb5559feOGFr776KhLYcfZhYEmSZDKZRFG0WCzhcBi67fF43Ov1njp1asiQIcuWLbvwwgtRc1qjO8dgVB5mYzHqJNB89/v97du3X7hw4cMPPxyLxaLRqCiKZrMZwSbk/ZjN5kgkYrVaFUWB/aQoiiAIRUVFeNYjxzYtLW3GjBnRaLRRo0bsmV5pkFaFDGi4Bumf4vF4/fr16X9h6Pj9/saNG3/77beqqoqiGA6HS0tLY7EY1YYNhULQ8YdtpJ0l4XB4586duq4HAoF4PI6IMKsbrQocDoeu608++aTP56P9rwghgiD8/PPPWMbtdouiWFJSEo1GcZPa7fZ4PO7xePAKhPt66tSpDz74YE5OTjAYRPJlze0Wg/G3YDYWo05Cu0TLsty1a9frrrvu8ssvVxQFVYd5eXlWqzUSiWCWdbvd0WjUYrE4nU4EpyZPnnz77bdLkoRMeVVVn376aZfL5fV6c3NzmZ+j0lCdWLidMHEiRJuZmXns2LHdu3fPmzdv2LBhsIBdLteJEydSU1M1TUOJaGpqKsxirAcOEo7jECZTzxKbzZaVlUUSzcWN62QkEYvFIggC5Hxfe+01juPS09ORiNKzZ88PP/zQ5/NFo1HEU9LT010uF3zMmqaZzeaioiL812q1Dh069LLLLrv11ltxYZhMptTU1JrePwajkrB8LEadhEYGRVFEOsgPP/xw9913v/fee5dffjkVEUWKD83/UBRFVdVIJPLNN98sWLCgVatW7777biwWa9q06aFDh2hmCZRIWT5WJcBRomJmiqIYBTWgOwoHRigUCgQC33zzzZYtW9544w273S6Kot1ux1xL096NPYBhtJ3VeHABhEKhlJQUURQDgUCnTp2OHz9OB0lqx3EjdeT8VjROpJ7ouo4o8IABA9atW0cIueGGG0RRbNu27fPPP0/V/EtLS+12u9PpROoV8rFUVd27d++oUaMmTpw4YsQIbAudeXw+Xw3uL4PxV6goH4uoqkrf2nVdh0Oewaj9xOPxkpISXdeht6Tr+v79+1u3bj1z5sxjx44Fg0Fd16PRqK7roijquh6JRHRd/+233wYOHPjwww/DX/XCCy/Y7fatW7diGUIIz/MulwsBRF3XcYPouo4ppAbBSOgdSk6bhvEnOuAaAVtHMhaGhGeL2Wz2eDx0mYKCAl3X0bKwtLSU/lwURbqDfr8fJ0U37HX8LNF1XRAEXddlWUbDaYynth03vY6cX72CcRJC3G53KBTSdV2SpO3bt6empt5zzz2aphUVFc2aNeuyyy779NNPY7EYzgJuT3iag8Hg8ePH33rrrc6dO3/11VdYQywWwzJ64i5mMGoz5d4Xuq4zPxajThIMBmkEAXVnhBD4q15//fXJkydPmTLliiuu6NChA+QY9uzZ8+uvvy5evNjlcj377LOXXnqpqqqyLDscjvXr1/fo0YPn+UAgkJaWhnUyP9bfwXis4KLQdd1qtZ46dQqZNzhTdLFQKGS1WuF9RBKP2Wymp9Xv99tsNtR7Op3OsxoJPCWEEDjJCCHIfKfzN6k1x62unN9yx4kR6rpOz9oXX3zRt29ffI5EIoIgTJo06fDhw3fddddVV13VsmXLYDCYlpb2008/7d+//8EHH5w0aVJOTk6TJk2MFw+s5LN1XjIY1U9FfixmYzHqMNFo1Ol04uqlAUFJkkpLS/fv379+/fpnn30WE8C4cePat2/fuXPnZs2aWa1WBI8QfkJ/HrRa8/l8sL2YjfU3h0qnXthY+F43JLohiofoIayfkpKS9PR0BAexj1arVZZlu90+ffr0xx57jFRKu4HO07hCuETbgNp23OrK+a1onHa7HdYw7iYaGUQoELfP8ePHN2zY8Oeff86ZMwfiDpMmTerQoUPPnj1RVSpJEqpK6Q+pccxg1GaYjcVglA+d7ElC1dBisVRCovqswCSUkpISCoUgVY+tV3V7EOwaFaBKS0vz+/3Yrs1mgx+CEOJ0OkVRTG4/OLfbXVJSQtOzkLhT0cK6rr/77rt33nknIeTgwYMXXHAB7N14PG6326ERD+Ut8r904Y0bkmU5PT2dJtRXNW63mza9rp7GL7iiSOICg3oFivWQzYbzW2bhpHBW55fBOMdg+lgMRjmIooj0amgNEEKQdl3V20V5I2Y4l8vl8XhItehWK4qSmZmJei5CiN/vr1+/fjweN5lMEBXzeDwOhwNyCUn028HasFqt1KflcDiM830Z4F+kShCA53lUkppMJqfTCXfjmQ1BhIPxGR0tIUmarP2qCLRjQosnl8sFeYJq0K/HFQUz1O12q6oK753FYoGgK3oZwfpJroF1VueXwThPYDYW47zGbrcjwZZ+Yzabq1k/KRgMxmIxFL5V9bY8Hk9RUZHdbqf+hvz8fIvFQsMx4XBYkiSHw5GSkpLEGI3RdYQCBbQvPMNPYFcRg/mLV0M6i+ODsWrndCAQr+s6uq+cPpgqoqioCHazqqrRaNRut0ejUeMYqo7MzExa5EEI8Xg8CMhC4VPTNBjTqampSbT5KnF+GYzzARYWZJzvUH8JbAvMFlWdemW328PhcGpqqp4ofINToaq3Gw6HU1JSwuGwruuZmZnozIg5GMalqqqxWAw180kcj67rHo8nHA57PB4cbWMmVrlUtGk4SKxWK9ZDVa/KBTLicIARQjCAcDhc1cfZ5/OVlJQQQurVq1dYWAhlClIt1xUuYLvdbrFYIpFIOBymf4Xdg+JNCHvW4PllMM4HmI3FOK9BMhZJpMyfPHkSpYXJTUU6HeTUW63WQ4cO3XzzzdOnTx84cGA1bHf+/Pljx44lhLhcrqKiosGDB7/22muNGzfGpmH2EUMBQbLGYzKZgsGgx+NBgx273U69UxVBc73xAdYA6gQhBI/5+8wrwTK6rsPY8ng8VPI0KftVESUlJQgXlpaWPvnkk61bt37ooYdI1Z/f/Pz8hg0bIp1cFMXWrVsvWLCgS5cu+KuxGhfU4PllMM4HWM4747wGtUv4AH0BxFmqJ1x49OjRMWPGPPvss9dcc40gCGiWXNUb7d69+5YtW3Rdt9lsyM555JFHJkyYgAmSEEKL10pLS5Mr/0gtWkIIksDOUCK3ePHiu+66Kx6PHzly5IILLjAuCcF3WtJ/hqcWVRI/fQBVDQzBSZMmNWrU6PHHHyfVVZo6d+7ciRMnYq8lSUpLS+vSpcuLL7540UUX0WVCoZDb7UbNZhI3fVbnl8E4l2A57wxGOUBZIBKJmM1mVJgjZFYpFbqzQJKkkydPjhgxYtKk
Sddcc00oFHI4HC6Xq6q3Gw6Hn376aViQaA8nSdLUqVNvuOGGAwcOmM1ms9msJ9KbfD5fsraLyCOsIsiBms3mvzgBU3UAcPDgQZPJZDSw9IrzsSADgS3SASA4W6WgH/l9993Xvn37xx9/nHZErurtBgKBe++9t1GjRqqqwuLx+/1r1qzp06fPmjVraN6hw+GAeZ2s7f6d88tgnMOwe4BxXqNpGs/zkE2CPI/ZbFYUhatiNE27+eabZ8+e3adPH0JISkoKSfg5qhSPx9OrV69hw4bRDCHMjr169WrWrBkhhPYRIoQk8Tg4HI5QKIRJl2ZDn7l+Ez88/XytWbOGJJpSkIQkQUUrwTLIfMemYc4ma78qQtf1b7/9VtO0kSNHQgUXW6/q7Xq93ng8Pn/+fGO+ud1u9/l8rVq1QjFmPB7nOI4YZMz+PpU4vwzG+QCzsRjnNbRyDf9F6CS5utK0Qh6ROPgSbrrppieffLJjx474E7wOXLVonMZisffffz8tLU3TNIvFUr9+/ZkzZ06cOJHn+WAwiEOBiE9yjwPsSGKQqDjz+iORiKIobrf7yJEjHMfBInzhhRf0RJEgfo60/Yqgm6AbpcNILsXFxfig6zoh5NNPP/3yyy8XLFhACKFZ53p19Rrv3bt37969qepb48aNV65c2axZMwzAbDbj++Reb2d7fhmM8wFmYzEYVQvavyBOHw6HnU7njBkzOnfufMUVV9jtdkEQZFlG4Vs1vPfruu50OouKip555pmMjAxFUerXr48yflrV7Q4AABS7SURBVF3XjQnRqqpSlYTqh+O48ePHE0IikciQIUPgKXE4HBMnToSYuCRJqIyrX79+NWhenAHUD2ZkZNCR7969+5NPPpk8eTJyz+HEkiSpGmJn8Xg8JSVFUZTFixdLkhSPx7t16+ZyuRo2bBgOh2OxmNVqDYfD1ZaUxmCc5zAbi8GoWpDhhPnVbrf//vvv27Ztu/nmm7OzswkhNpsNgRVJkqrhvR+ejMzMzJEjRzZq1Oidd9759ddfP/roo40bNyJjiaY3cRxXg/k0eqKTt57oH6zrOrLIp02bRgix2WwejwdpTzUoEBCJRNLT0wkh4XDY7/criiKK4qOPPtq/f/8LL7wwHA4j9S0ajcKKrerxwEy3WCw+n+/dd99t1qzZ4sWLhw4d+vzzz7vdblRU0B6CLF+KwahqWF0hg1G10HsKLY3HjBkzePDgYcOGGf8ky7KiKNVQVAgFI3wuLS1NS0uLRCK///77ggULXnrpJa/XS3sK0X5zNYUoijzPm0wmURRxZHC4RFGMx+PQao/H41SetKbGWVxcDA13pNN9/vnnW7dunTlzpqqqGDmMV6vVqldXy0tcaX6/32KxwLPVsWPH1atXZ2dnoykkIQQtO6thMAzG+QCrK2QwagCj/pDFYoG7aNiwYTQzBrFCq9XqcrmqITbncrkQWZMkCV14PR7P1VdfnZGRsX79eo7jaDlYtSUPlUsgEICKJs/zTqeTaqKqqmq3291uNxxdPM9zHFezQ83IyKCev7y8vBtuuOGhhx6Kx+OqqrpcLjQ5tlqtKCConiFhQykpKW632+v1chz3xBNPvPHGG4IgQP4Nhlf1DIbBOJ9hNhaDUYXouk49VRaLZcGCBePGjSOEUMvA4XBYrVZU9VeD38hkMtnt9oKCApvNxvM87Sg3ZsyYxYsXE0Lg2Kb/VvV4KsLr9RJCRFGEJjuSscxmMzpDh0IhjuPQEzoej9dgXzxZltGdmhAiSdK2bdvmz5+flZUliiJCcih0gAJq9diC0WjU4XBomhaNRgkhsVhMVdWcnJzff//92LFjaEwJCbRqGAyDcZ7DbCwGowqBRwEhrWPHjgmC0KVLF3Q7gTYVFkMLuWoYD/LEES0iiZZ/JpPp0ksv1TRt9+7dGAaGXeM63TabDTFBxN3Qbs9kMqWkpGiaVlRUBMO0BvOxrFYrChfi8bjNZpszZ86IESMKCgrcbnc8HkesUFVVHM9qsFmxRUmSVFVFB0yn04mtDxo0aOPGjSRxWquhaSODwWA2FoNRhWB6Q0vEHTt2QM8ddoPT6UQSj6IoDofDZrNVg5mVmpqK1oSBQIAktCow919//fV79+5FxZnJZKpB5xCgDf4g42S32xHHFEVRFEWTyZSZmclxHJXGqMFBYmx+v5/nebPZnJWVJcsyXIYkkfAuimI1pLra7XYqgo8wJZLAdF2/6aabli1bRgiB/jtNy2MwGFUHs7EYjCoE5guCRMePH+/QoUOZpBzOIKFp1I2sImKxGGZ6BOOoCUgIueSSS3bt2gXTStd1zNZVPZ4zgFAghmesgLPb7UbHVZkGfNUMThnO8vHjx7t27YpvqOopSQhHVZuzzWQyQS4EvZJQHyrLss/n27BhA5aB2crChQxGVcNsLAajCoF1xfO8qqqlpaUej6dm63YROYLzDKE3uD0kSWrcuDFVy8QyrLb/L4JzSlOd/H5/TY+oLHCkEUIQnoYhyGwsBqOqYc9QBqMKocnFmqZBZdTYELCmQMNghN7Q0Q+a71arFQ0cSSJZm3FmYIZClQoOtlAohNq92obNZnO5XCaTiVZX1KDgBYNxnsBsLAajCqHJzhaLpU2bNjt27CA1nW5slMSjOByOLVu2XHLJJTTPvcYNwToE1Uo4ceJEzTr/INZ6+vfhcFiW5bZt21osFuq+qvGaBgbjnIfZWAxG1WIymeDnaNu27fr161VVRbpMTQFNKUKIpmmoZ8SsvGLFivbt25OExcC0iP8KSF+jKW6nTp1yu9012NunXANL13WPx7Ns2bKcnBzCQoQMRjXCbCwGowpBxA0Og4yMDE3Tdu7cWYP+AzoHQyQTnXw4jvvzzz8jkciFF16Iv0IioaYGWYeg4meEEJvNNnz48OnTp9eglgTUpel/dV1Hvh0hZOnSpUOHDtV1XVVVXJbMVclgVDXsMcpgVCFGS4Xn+ZEjR37++efFxcU1NR4q0UTFUQkhsVhsxYoVI0aMgHBDzXYqrFvASUmNlXvuuWfdunVHjx6tqfGUCQHDxtI0beXKla1bt27SpAnC1uz8MhjVA7vTGIwqBLVmNH/8hhtu2Lx58x9//FFT46FzMHV4FBcX//HHH99///3o0aNhgVWbYOa5gSzLNptNVVVBEMxm8/jx46dPn17Tg/r/gZk1Y8aMCRMmoJkaukDG43GW885gVDXMxmIwqhDIfyMDBiHCd999t1+/fvv27SOEQDTh9J/ANYIe7ZXLnqFZVhToNfA8T+Uk8H0sFuvcufOCBQsikQg8W6qqaprGEqL/CoqiILsO6qOEkEGDBrlcrldeeYUQgsAcIUSWZbRLUhTFaLz+/WgdDfwBejnpui5JkslkKi4ufvzxx6dOnWqxWDBCLGM2m2tcZpbBOOdhNhaDUeVAjkhRlGg02rhx45UrV953330HDx7keR5S77quU1Els9lssVion6lyZhayrARBoLM4xE6h+i3LMv7Nz88fOnToL7/84vP5XC5X8vb4fMFisdCGiTB3UlJSHnrooeXLl3/xxRccx/E8X1xcjPBcPB5Hl+s
kDgC2He3LBIP+1KlTkiQh8rts2TKXy9WlS5esrCz6E1xR1aB5y2Cc57DSIQajyoGhY7fb4XIYOHCg0+nMyclZsWLFhRdeCLXPtLQ0LQFJ+Jk4jqvcROj3+71er8PhIISIoqjrOlX9Rg9jXdfD4XC3bt2ee+65Zs2alTGwWBTpr0Nl6K1WazgcTk1Nbd68+aefftqzZ8+UlJRu3bqlpqZyHBcMBj0eTzwex8WA9kqVPs6KokDpStd1i8WCE60oisVikWW5QYMGhJBYLDZz5kxJkiZPngw1fPTNhCIuS8liMKoBziiWg9gEq9lmMJIFpj1N02RZRrmZIAg8z8fj8eLi4sGDB7/11ludO3cmhEQiEYfDUa6TA86ns9puuT8RBAGZQ/F4PC8v78Ybb5w+fXrfvn0xQxNCYOGV6V3DOAMwWOl/cQDhsJQkKScn5+67777zzjsJIZFIBH0qy4B4YqWfusYKUF3XQ6FQamqq3+9PTU0dN26c3W6fPXs23GnIC6QtoitxUTEYjIrAnUgtKAQiNE1jT1IGowqBbDqcFoIgEELsdrvFYnE6nQ0aNHjxxRfHjx//+eefE0KsVivP84qioMsNXYOmaZXQW+I4jsawCCGxWExVVYfDYTKZLBbLmjVrcnJyXnvttSFDhjgcDkVRNE2DmoPRwGK1/f8TahuFQiE8ZM1ms9VqzcjIaNiw4aeffrpp06Zx48adPHnS7XYjdKuqqizLsiwjCsxxXOUMLOR1mUwmTdNKSkrC4TDHcej5LQhC//79GzduPGvWLJLwiULn3WhXQbaNwWBUHcyPxWBULdFoFJE4SZJo2XwgEEBX5sOHD7/wwguRSOTZZ59NT0+nrg6YR0jMqsRGg8EgDQ+RhCkgSVJubu6rr74aiURmzpyZmZkpSVJJSUl2djZJWFR0c6qqMhGH/wmidSaTSVVVmDLwXAaDQbQq4jju66+/fvrpp6dPnz5o0CCj96jMAT8rRFFE6hWKKvAlqhbeeuut1atXT548+dprryWEhMNhj8dDhwdPG64H5spiMJJFRX4sZmMxGFULSsloRInGDRVFkWXZ5XIFg8Ht27dPnDhx7Nix/fv3p+nnqET7OynSxhjlvn37fv755+eee+6ll17q3bu3KIqKong8HkIInYApqqoaBbQYZyAWixmF+2HTGBc4dOhQamrqqFGjeJ5//vnn09LSMjMz6bFVVVVV1Uqnn9Nzl5ubu2fPnvHjxz/yyCM33XSTy+Wy2+0IUMIUg71utLkZDEayYDYWg1EDUCcWfAZ0RqRlXyAQCHg8nuXLl8+dO7d79+7XX39927ZtqU+rcqrrgiAg0WrLli0ffPBBaWlp69at//nPf0aj0fT0dLoYBB0QciKEIJMaXzL5hr8CUrJisRghxGhs4dTjjONc/PDDD08++eRll13Wo0ePDh06NG7c+O88bGkC3+7du997770DBw5kZWU9++yzmqahhFCWZavVWtEjnUn5MxhJhNlYDEYdQBCEPXv2LFmy5IUXXrjvvvuuvPLKzp07cxzXpEkTjuPgIMHcSQwGHP2GEFJcXFxUVFRUVJSbm7t69erVq1ePHz++f//+Xbt2NVp1jJpi7969K1euXL9+Pcdx3bp169atW3Z2tsPhyMzMdDgcmqbFYjGY1/CQIXmLGtxIvSoqKiotLd2yZcvGjRvdbveIESMGDx5sNJ0ZDEZ1wmwsBqMOQJ1PgUDAZDJt2LBh586d0Wh006ZNe/fuDQaDp//EbDa7XK5wOKxpms1m69ixY7t27VJSUjp37ty6dWtCCLoQiqIoy3JKSko17xHDCLK1CCGqqhYWFh49evSXX37ZtWtXQUHBt99+GwqFUlJSNE2LRCKEELfbjQ8ul0sURSS5X3jhhVddddUll1xy8cUXX3rppY0bN87MzMTKS0tLfT5fze0cg3H+wmwsBqPOgJweFANqmobsdUEQqFS30XEFkEGFG9sYA5Jl2WQyybLscDhYgnMtAclwPM8bY4uSJNH0O9hhsixD47Si2kMUhOIzc1IyGDUIs7EYjDoDvFmwmaC3ZOzja8ykgaYltCHoz+ndLssyncWj0aimaWXSsRk1Cx65hBBN0yDNT07L1SPlnXFoQFCbm7AiQQajRqnIxmLmFINRi4hGo3a7/f9r725Cm1i7AI4/85XvjwqSQjcuXIm4UHQjoQXRXVHQhQsLiitBcKcLdelGLLoT3Ej8WCguXFRooQhCQMFVKRjIxq1EY2ybjJnMTGbexeEO5fb9uNzOeydT/r9VJiTllDbJyXme5xxZLrRtu1QqScMqeQEHQSCfrEop13W3trb2798vT5T5dNL12zRNz/Oi4ods62FUziSQnljReQLZEf+nveeSYAVB0Ov1KpVKNFJJumlsb3kakXqnLBb/I78HgL+EOhYwWaKzhztP9Uev0GhmczRAOpvNRqWsPx0ZkwP8u+wnjv+H6C8lObGs6kY9Nf7TU2zbtixLejHssrsHgFiwVgikw/amVpGoCVP0CpWm4dFSYBAEco+c55eqhuM40tud8SmTZmcD0u1dymQ1cDQaSfd/pZSs/Oq6vrOOJX3U5Db5FpAIZukAKTAcDqPPUdu21R8DT3K5XLPZnJ+fz2az9Xr9+fPnX79+/fbtmxzsl29KxWKxVCrJMEQZ6iKX0nlyNBqRYCUu2qIuq3tyW6Ynff/+/fbt25qm5fP5RqPR6XSazaZhGGEYytt3oVCQfww5YBj9BMmhZShhQr8WgH+PHAuYINF4ZqWU7KCS3VetVmt2dvbatWvj8Xh1dbXdbh89erRcLmcymajUEdlZijZNk506kyBawzVNM/qrWZblOM6NGzemp6c7nc5wOKxUKjMzM5JLyRri9sVfeeL2jVyM8QYmE8uCQAqsra1ls9kjR44opfL5/K1bt0ajUafTqdVqSYeGGLRaraWlpYWFhVqt5jjOhQsXlpaW/sYscAATha8+QArIYt+jR4/a7bZSqlAoXL58mUYMe0an08nn8w8ePFhdXdU07efPn3Nzc9u7ZwFII/a8Aymwvr5+4sSJ8Xi8b9++K1euXL16tVarMTtlz/B9/9y5c81ms9/vX79+/eLFi/V6/devX/RtB1KBPe9Aih06dOjTp08nT57sdruLi4vHjh1bXl6ONk1jD3j69On8/LxS6smTJ7Ozs/fv35f+/gDSizoWkALS+7vX6y0vL7969erdu3dKqWazWa/Xkw4N8ZD33s+fPz9+/Pjly5fj8fjevXt37txJOi4A/xt1LCDFnj17Nh6Pp6amLl269OLFi4cPH8qdSceFeLx+/fr379+u6x4+fLjRaHz48EEpdffu3aTjArAr5FhACrRarY8fP8r5/HK5fPbsWaXUzMxM0nEhHoZhvH//PgzDYrE4GAzq9frx48fn5uaSjgvArpBjASkwHA5PnTr15s0bpZRhGCsrK5qmLSwsJB0X4pHL5c6fP99oNBzHKZVKKysra2trN2/eTDouALtCjgWkwIEDB378+KFpWiaT0TTty5cv7XZ7eno66bgQj16vt76+fvDgwdOnT2uatri4+P
bt2zNnziQdF4BdYc87kBr9fl96YnmeJ/3fsZdEI6Jd190+CxzAhGPPO5Bum5ub5XK53+8PBgPLsgaDQdIRITaO4/R6PXmPDoIgk8n0+316cwBpRx0LSIGogiXkOxPVrD1mY2NjampKKeU4jox/BpAK1LGAFJMEq9vtep4XhqFUOKhz7BndblcpJQmW53nypde27YTDArA71LGAFPB9X9M0wzCiS9/3KXXsJRsbG9Vq1fM82Yll23axWEw6KAB/CXUsIMXCMDQMw3EcuXRdlwRrjymVSnJuVC6jfBpAelHHAgAA+PuoYwEAAPxzyLEAAADiR44FAAAQP3IsAACA+JFjAQAAxI8cCwAAIH7kWAAAAPEjxwIAAIgfORYAAED8yLEAAADiR44FAAAQP3IsAACA+JFjAQAAxI8cCwAAIH7kWAAAAPHTdV33PE8plc1mXdfVNE0pNRqNkg4MAAAgBSRrMk1TKeX7fqVSCcNQKaWFYRgEgWEYSim5S2y/DQAAgJ1c1zVNU/Io3/dN09Q0rVgsDgYDzXGcbDabyWQ8zwvDcHNzs1AoyCOSDhsAACAdJMFSSmmapuv6eDw2s9lsEASe51Wr1eFwWK1W1R9JWdLRAgAATDTf9y3L0jRNEifbtqvV6tbWllJKs227UChYluX7fqlUGgwGSild11krBAAA+O8sy3JdN7qUVMowDN/3VRiG4/FYHqTrei6XKxQKyYUKAACQMrqu67ou+6wqlYrUqv4FWTUa5YkajVsAAAAASUVORK5CYII=)Fig : Simple problem solving agent
###Code
class vacuumAgent(SimpleProblemSolvingAgentProgram):
def update_state(self, state, percept):
return percept
def formulate_goal(self, state):
goal = [state7, state8]
return goal
def formulate_problem(self, state, goal):
problem = state
return problem
def search(self, problem):
seq = ["None"]
if problem == state1:
seq = ["Suck", "Right", "Suck"]
elif problem == state2:
seq = ["Suck", "Left", "Suck"]
elif problem == state3:
seq = ["Right", "Suck"]
elif problem == state4:
seq = ["Suck"]
elif problem == state5:
seq = ["Suck"]
elif problem == state6:
seq = ["Left", "Suck"]
return seq
###Output
_____no_output_____
###Markdown
Now, we will define all the 8 states and create an object of the above class. Then, we will pass it different states and check the output:
###Code
state1 = [(0, 0), [(0, 0), "Dirty"], [(1, 0), ["Dirty"]]]
state2 = [(1, 0), [(0, 0), "Dirty"], [(1, 0), ["Dirty"]]]
state3 = [(0, 0), [(0, 0), "Clean"], [(1, 0), ["Dirty"]]]
state4 = [(1, 0), [(0, 0), "Clean"], [(1, 0), ["Dirty"]]]
state5 = [(0, 0), [(0, 0), "Dirty"], [(1, 0), ["Clean"]]]
state6 = [(1, 0), [(0, 0), "Dirty"], [(1, 0), ["Clean"]]]
state7 = [(0, 0), [(0, 0), "Clean"], [(1, 0), ["Clean"]]]
state8 = [(1, 0), [(0, 0), "Clean"], [(1, 0), ["Clean"]]]
a = vacuumAgent(state1)
print(a(state6))
print(a(state1))
print(a(state3))
###Output
Left
Suck
Right
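###Markdown
To see why the three calls above print `Left`, `Suck`, `Right`, it helps to look at how the base class drives the agent. The sketch below is an approximation, from memory, of how `SimpleProblemSolvingAgentProgram.__call__` behaves in the AIMA `search` module (check your local copy for the exact code): the agent only runs a new search when its stored action sequence is empty, otherwise it keeps popping actions from the previous plan.
###Code
# Approximate behaviour of the base class used above (not the notebook's own code):
# the agent caches the action sequence returned by search() and replays it one
# action per call, ignoring new percepts until the sequence is exhausted.
class SimpleProblemSolvingAgentProgramSketch:
    def __init__(self, initial_state=None):
        self.state = initial_state
        self.seq = []  # remaining actions from the last search

    def __call__(self, percept):
        self.state = self.update_state(self.state, percept)
        if not self.seq:  # only search when the old plan is used up
            goal = self.formulate_goal(self.state)
            problem = self.formulate_problem(self.state, goal)
            self.seq = self.search(problem)
            if not self.seq:
                return None
        return self.seq.pop(0)  # replay the cached plan one step at a time
###Output
_____no_output_____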
###Markdown
**Task 1** **[5%]** 1) Print the output of the robot at every state, treating that state as its current state, and explain the output logic. 2) From each current state, describe where it would move next.
###Code
print(a(state1))
print(a(state2))
print(a(state3))
print(a(state4))
print(a(state5))
print(a(state6))
print(a(state7))
print(a(state8))
###Output
Suck
Suck
Left
Suck
Suck
Left
Suck
None
###Markdown
SEARCHING ALGORITHMS VISUALIZATION
In this section, we have visualizations of the following searching algorithms:
1. Breadth First Tree Search
2. Depth First Tree Search
3. Breadth First Graph Search
4. Depth First Graph Search
5. Uniform Cost Search
6. Depth Limited Search
7. Iterative Deepening Search
Useful references to know more about uninformed search:
https://www.geeksforgeeks.org/breadth-first-search-or-bfs-for-a-graph/
https://medium.com/nothingaholic/depth-first-search-vs-breadth-first-search-in-python-81521caa8f44
https://algodaily.com/lessons/dfs-vs-bfs
https://towardsdatascience.com/search-algorithm-dijkstras-algorithm-uniform-cost-search-with-python-ccbee250ba9
https://ai-master.gitbooks.io/classic-search/content/what-is-depth-limited-search.html
https://www.educative.io/edpresso/what-is-iterative-deepening-search
We add colors to the nodes to get a clear picture when displaying. These are the colors used in the visuals:
* Un-explored nodes - white
* Frontier nodes - orange
* Currently exploring node - red
* Already explored nodes - gray
1. BREADTH-FIRST TREE SEARCH
We have a working implementation in the `search` module. But as we want to interact with the graph while it is searching, we need to modify the implementation. Here's the modified breadth first tree search.
###Code
def tree_breadth_search_for_vis(problem):
"""Search through the successors of a problem to find a goal.
The argument frontier should be an empty queue.
Don't worry about repeated paths to a state. [Figure 3.7]"""
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
#Adding first node to the queue
frontier = deque([Node(problem.initial)])
node_colors[Node(problem.initial).state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
while frontier:
#Popping first node of queue
node = frontier.popleft()
# modify the currently searching node to red
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
# modify goal node to green after reaching the goal
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
frontier.extend(node.expand(problem))
for n in node.expand(problem):
node_colors[n.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
# modify the color of explored nodes to gray
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
def breadth_first_tree_search(problem):
"Search the shallowest nodes in the search tree first."
iterations, all_node_colors, node = tree_breadth_search_for_vis(problem)
return(iterations, all_node_colors, node)
###Output
_____no_output_____
###Markdown
Now, we use `ipywidgets` to display a slider, a button and our Romania map. By sliding the slider we can look at all the intermediate steps of a particular search algorithm. By pressing the **Visualize** button, you can see all the steps without interacting with the slider. Two helper functions act as the callbacks that are invoked when we interact with the slider and the button. As a rough illustration of how such callbacks can be wired (this is a minimal sketch, not the notebook's `display_visual` helper; `draw_fn` is a hypothetical function that redraws the graph for one recorded color snapshot):
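###Code
# Minimal sketch of the slider/button callback pattern used for the visualizations.
# Assumes `all_node_colors` is the list of color snapshots returned by a search
# and `draw_fn(node_colors)` is some function that redraws the graph.
import ipywidgets as widgets
from IPython.display import display

def make_step_controls(all_node_colors, draw_fn):
    slider = widgets.IntSlider(min=0, max=len(all_node_colors) - 1,
                               value=0, description='Step')
    button = widgets.Button(description='Visualize')

    def on_slide(change):
        # Redraw the graph using the colors recorded at the selected iteration.
        draw_fn(all_node_colors[change['new']])

    def on_click(_):
        # Step through every recorded snapshot in order.
        for colors in all_node_colors:
            draw_fn(colors)

    slider.observe(on_slide, names='value')
    button.on_click(on_click)
    display(slider, button)
###Output
_____no_output_____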
###Code
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
a, b, c = breadth_first_tree_search(romania_problem)
display_visual(romania_graph_data, user_input=False,
algorithm=breadth_first_tree_search,
problem=romania_problem)
###Output
_____no_output_____
###Markdown
2. DEPTH-FIRST TREE SEARCH
Now let's discuss another searching algorithm, Depth-First Tree Search. Structurally it is almost identical to the breadth-first version above; the only difference is the frontier discipline, illustrated in the small snippet below before the full implementation.
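###Code
# Quick illustration of the frontier discipline that separates the two tree
# searches: breadth-first uses a FIFO queue, depth-first uses a LIFO stack.
from collections import deque

fifo = deque([1, 2, 3])   # frontier for breadth-first tree search
lifo = [1, 2, 3]          # frontier for depth-first tree search
print(fifo.popleft())     # 1 -> oldest node expanded first (BFS)
print(lifo.pop())         # 3 -> most recently generated node expanded first (DFS)
###Output
1
3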
###Code
def tree_depth_search_for_vis(problem):
"""Search through the successors of a problem to find a goal.
The argument frontier should be an empty queue.
Don't worry about repeated paths to a state. [Figure 3.7]"""
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
#Adding first node to the stack
frontier = [Node(problem.initial)]
node_colors[Node(problem.initial).state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
while frontier:
#Popping first node of stack
node = frontier.pop()
# modify the currently searching node to red
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
# modify goal node to green after reaching the goal
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
frontier.extend(node.expand(problem))
for n in node.expand(problem):
node_colors[n.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
# modify the color of explored nodes to gray
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
def depth_first_tree_search(problem):
"Search the deepest nodes in the search tree first."
iterations, all_node_colors, node = tree_depth_search_for_vis(problem)
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=depth_first_tree_search,
problem=romania_problem)
###Output
_____no_output_____
###Markdown
3. BREADTH-FIRST GRAPH SEARCHLet's change all the `node_colors` to starting position and define a different problem statement.
###Code
def breadth_first_search_graph(problem):
"[Figure 3.11]"
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
node = Node(problem.initial)
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
frontier = deque([node])
# modify the color of frontier nodes to blue
node_colors[node.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
explored = set()
while frontier:
node = frontier.popleft()
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
explored.add(node.state)
for child in node.expand(problem):
if child.state not in explored and child not in frontier:
if problem.goal_test(child.state):
node_colors[child.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, child)
frontier.append(child)
node_colors[child.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=breadth_first_search_graph,
problem=romania_problem)
###Output
_____no_output_____
###Markdown
4. DEPTH-FIRST GRAPH SEARCH
Although we have a working implementation in the `search` module, we have to make a few changes in the algorithm to make it suitable for visualization.
###Code
def graph_search_for_vis(problem):
"""Search through the successors of a problem to find a goal.
The argument frontier should be an empty queue.
If two paths reach a state, only use the first one. [Figure 3.7]"""
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
frontier = [(Node(problem.initial))]
explored = set()
# modify the color of frontier nodes to orange
node_colors[Node(problem.initial).state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
while frontier:
# Popping first node of stack
node = frontier.pop()
# modify the currently searching node to red
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
# modify goal node to green after reaching the goal
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
explored.add(node.state)
frontier.extend(child for child in node.expand(problem)
if child.state not in explored and
child not in frontier)
for n in frontier:
# modify the color of frontier nodes to orange
node_colors[n.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
# modify the color of explored nodes to gray
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
def depth_first_graph_search(problem):
"""Search the deepest nodes in the search tree first."""
iterations, all_node_colors, node = graph_search_for_vis(problem)
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=depth_first_graph_search,
problem=romania_problem)
###Output
_____no_output_____
###Markdown
5. UNIFORM COST SEARCHLet's change all the `node_colors` to starting position and define a different problem statement.
###Code
def best_first_graph_search_for_vis(problem, f):
"""Search the nodes with the lowest f scores first.
You specify the function f(node) that you want to minimize; for example,
if f is a heuristic estimate to the goal, then we have greedy best
first search; if f is node.depth then we have breadth-first search.
There is a subtlety: the line "f = memoize(f, 'f')" means that the f
values will be cached on the nodes as they are computed. So after doing
a best first search you can examine the f values of the path returned."""
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
f = memoize(f, 'f')
node = Node(problem.initial)
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
frontier = PriorityQueue('min', f)
frontier.append(node)
node_colors[node.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
explored = set()
while frontier:
node = frontier.pop()
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
explored.add(node.state)
for child in node.expand(problem):
if child.state not in explored and child not in frontier:
frontier.append(child)
node_colors[child.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
elif child in frontier:
incumbent = frontier[child]
if f(child) < incumbent:
del frontier[child]
frontier.append(child)
node_colors[child.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
def uniform_cost_search_graph(problem):
"[Figure 3.14]"
#Uniform Cost Search uses Best First Search algorithm with f(n) = g(n)
iterations, all_node_colors, node = best_first_graph_search_for_vis(problem, lambda node: node.path_cost)
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=uniform_cost_search_graph,
problem=romania_problem)
###Output
_____no_output_____
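###Markdown
As a quick sanity check (optional, not part of the original assignment): since uniform cost search expands nodes in order of path cost, it should return the cheapest Arad-to-Bucharest route, which on the AIMA Romania map is the 418-cost path via Sibiu, Rimnicu Vilcea and Pitesti. A minimal sketch, reusing the functions defined above:
###Code
# Sanity check: inspect the route and cost returned by uniform cost search.
# Assumes GraphProblem, romania_map and uniform_cost_search_graph from earlier cells.
_, _, goal_node = uniform_cost_search_graph(
    GraphProblem('Arad', 'Bucharest', romania_map))
print([n.state for n in goal_node.path()])  # cities along the returned route
print(goal_node.path_cost)                  # total cost of that route
###Output
_____no_output_____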
###Markdown
6. DEPTH LIMITED SEARCH
Let's change all the `node_colors` back to the starting position and define a different problem statement. Although we have a working implementation, we need to make changes for visualization. For comparison with the visualization-oriented version below, a compact recursive sketch of the textbook formulation is given first.
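###Code
# A compact recursive sketch of depth-limited search close to the textbook
# formulation (AIMA Figure 3.17), shown only for comparison with the iterative,
# visualization-oriented version below. Assumes the same Problem/Node classes.
def recursive_dls_sketch(node, problem, limit):
    if problem.goal_test(node.state):
        return node
    elif limit == 0:
        return 'cutoff'          # depth limit reached on this branch
    else:
        cutoff_occurred = False
        for child in node.expand(problem):
            result = recursive_dls_sketch(child, problem, limit - 1)
            if result == 'cutoff':
                cutoff_occurred = True
            elif result is not None:
                return result     # goal found somewhere below this child
        return 'cutoff' if cutoff_occurred else None
###Output
_____no_output_____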
###Code
def depth_limited_search_graph(problem, limit = -1):
'''
Perform depth first search of graph g.
if limit >= 0, that is the maximum depth of the search.
'''
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
frontier = [Node(problem.initial)]
explored = set()
cutoff_occurred = False
node_colors[Node(problem.initial).state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
while frontier:
# Popping first node of queue
node = frontier.pop()
# modify the currently searching node to red
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
# modify goal node to green after reaching the goal
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
elif limit >= 0:
cutoff_occurred = True
limit += 1
all_node_colors.pop()
iterations -= 1
node_colors[node.state] = "gray"
explored.add(node.state)
frontier.extend(child for child in node.expand(problem)
if child.state not in explored and
child not in frontier)
for n in frontier:
limit -= 1
# modify the color of frontier nodes to orange
node_colors[n.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
# modify the color of explored nodes to gray
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return 'cutoff' if cutoff_occurred else None
def depth_limited_search_for_vis(problem):
"""Search the deepest nodes in the search tree first."""
iterations, all_node_colors, node = depth_limited_search_graph(problem)
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=depth_limited_search_for_vis,
problem=romania_problem)
###Output
_____no_output_____
###Markdown
7. ITERATIVE DEEPENING SEARCH
Let's change all the `node_colors` back to the starting position and define a different problem statement. The usual iterative-deepening pattern is sketched below for reference, followed by the visualization version used in this notebook.
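###Code
# Sketch of the textbook iterative-deepening pattern (AIMA Figure 3.18), shown
# for comparison: the depth limit grows by one each round until the underlying
# depth-limited search no longer reports a cutoff. `dls` is assumed to be any
# depth-limited search that returns 'cutoff' when the limit was reached.
import sys

def iterative_deepening_sketch(problem, dls):
    for depth in range(sys.maxsize):
        result = dls(problem, depth)
        if result != 'cutoff':
            return result
###Output
_____no_output_____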
###Code
def iterative_deepening_search_for_vis(problem):
for depth in range(sys.maxsize):
iterations, all_node_colors, node=depth_limited_search_for_vis(problem)
if iterations:
return (iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=iterative_deepening_search_for_vis,
problem=romania_problem)
###Output
_____no_output_____
###Markdown
**TASK 2** **[10%]** Run and analyze all of the code included in this assignment, understand it, and explain the working mechanism of each part of the code. **TASK 3** **[15%]** For each search method, explain the graph in the visualization part and the complete route taken to reach the goal node. Compare the route taken in this notebook's visualization with the search method's logic and check whether they match. **TASK 4** **[35%]**
Fig 1: santa_barbara_map
* Create an undirected graph such as the romania_map, containing a dict of nodes as keys and neighbours as values.
* Start exploring from Santa Barbara and try to find El Cajon on the map.
* Start exploring from Barstow and try to find El Cajon on the map.
* Show the visualisation of the map [Figure 1] from the task and see how the different searching algorithms perform and how the frontier expands in each of the following search algorithms:
> 1) Breadth First Tree Search
> 2) Depth First Tree Search
> 3) Breadth First Search
> 4) Depth First Graph Search
> 5) Uniform Cost Search
> 6) Depth Limited Search
> 7) Iterative Deepening Search
* Repeat task 3.
###Code
santa_barbara_map = UndirectedGraph(dict(
Barstow = dict(Riverside = 75, Santa_Barbara = 45),
El_Cajon = dict(San_Diego = 15),
Los_Angeles = dict(Malibu = 20,Riverside = 25, San_Diego = 100),
Malibu = dict(Los_Angeles = 20, Santa_Barbara = 45),
Palm_Springs = dict(Riverside = 75),
Riverside = dict(Barstow = 75, Los_Angeles = 25, Palm_Springs = 75, San_Diego = 90),
Santa_Barbara = dict(Barstow = 45, Malibu = 45,Los_Angeles = 30),
San_Diego = dict(El_Cajon = 15, Los_Angeles = 100,Riverside = 90)))
santa_barbara_map.locations = dict(
Barstow=(240,530), El_Cajon=(270,300), Los_Angeles=(120,420),
Malibu=(80,450), Palm_Springs=(280,450), Riverside=(200,420),
Santa_Barbara=(131,530), San_Diego=(210,300))
santa_barbara_locations = santa_barbara_map.locations
print(santa_barbara_locations)
# node colors, node positions and node label positions
node_colors = {node: 'white' for node in santa_barbara_map.locations.keys()}
node_positions = santa_barbara_map.locations
node_label_pos = { k:[v[0],v[1]-10] for k,v in santa_barbara_map.locations.items() }
edge_weights = {(k, k2) : v2 for k, v in santa_barbara_map.graph_dict.items() for k2, v2 in v.items()}
santa_barbara_graph_data = { 'graph_dict' : santa_barbara_map.graph_dict,
'node_colors': node_colors,
'node_positions': node_positions,
'node_label_positions': node_label_pos,
'edge_weights': edge_weights
}
show_map(santa_barbara_graph_data)
###Output
_____no_output_____
###Markdown
1. Breadth First Tree Search Santa_Barbara to El_Cajon
###Code
all_node_colors = []
santa_barbara_problem = GraphProblem('Santa_Barbara', 'El_Cajon', santa_barbara_map)
display_visual(santa_barbara_graph_data, user_input=False,
algorithm=breadth_first_tree_search,
problem=santa_barbara_problem)
###Output
_____no_output_____
###Markdown
Barstow to El_Cajon
###Code
all_node_colors = []
santa_barbara_problem = GraphProblem('Barstow', 'El_Cajon', santa_barbara_map)
display_visual(santa_barbara_graph_data, user_input=False,
algorithm=breadth_first_tree_search,
problem=santa_barbara_problem)
###Output
_____no_output_____
###Markdown
2.Depth First Tree Search Santa_Barbara to El_Cajon
###Code
all_node_colors = []
santa_barbara_problem = GraphProblem('Santa_Barbara', 'El_Cajon', santa_barbara_map)
display_visual(santa_barbara_graph_data, user_input=False,
algorithm=depth_first_tree_search,
problem=santa_barbara_problem)
###Output
_____no_output_____
###Markdown
Barstow to El_Cajon
###Code
all_node_colors = []
santa_barbara_problem = GraphProblem('Barstow', 'El_Cajon', santa_barbara_map)
display_visual(santa_barbara_graph_data, user_input=False,
algorithm=depth_first_tree_search,
problem=santa_barbara_problem)
###Output
_____no_output_____
###Markdown
3.Breadth First Graph Search Santa_Barbara to El_Cajon
###Code
all_node_colors = []
santa_barbara_problem = GraphProblem('Santa_Barbara', 'El_Cajon', santa_barbara_map)
display_visual(santa_barbara_graph_data, user_input=False,
algorithm=breadth_first_search_graph,
problem=santa_barbara_problem)
###Output
_____no_output_____
###Markdown
Barstow to El_Cajon
###Code
all_node_colors = []
santa_barbara_problem = GraphProblem('Barstow', 'El_Cajon', santa_barbara_map)
display_visual(santa_barbara_graph_data, user_input=False,
algorithm=breadth_first_search_graph,
problem=santa_barbara_problem)
###Output
_____no_output_____
###Markdown
4. Depth First Graph Search Santa_Barbara to El_Cajon
###Code
all_node_colors = []
santa_barbara_problem = GraphProblem('Santa_Barbara', 'El_Cajon', santa_barbara_map)
display_visual(santa_barbara_graph_data, user_input=False,
algorithm=depth_first_graph_search,
problem=santa_barbara_problem)
###Output
_____no_output_____
###Markdown
Barstow to El_Cajon
###Code
all_node_colors = []
santa_barbara_problem = GraphProblem('Barstow', 'El_Cajon', santa_barbara_map)
display_visual(santa_barbara_graph_data, user_input=False,
algorithm=depth_first_graph_search,
problem=santa_barbara_problem)
###Output
_____no_output_____
###Markdown
5. UNIFORM COST SEARCH Santa_Barbara to El_Cajon
###Code
all_node_colors = []
santa_barbara_problem = GraphProblem('Santa_Barbara', 'El_Cajon', santa_barbara_map)
display_visual(santa_barbara_graph_data, user_input=False,
algorithm=uniform_cost_search_graph,
problem=santa_barbara_problem)
###Output
_____no_output_____
###Markdown
Barstow to El_Cajon
###Code
all_node_colors = []
santa_barbara_problem = GraphProblem('Barstow', 'El_Cajon', santa_barbara_map)
display_visual(santa_barbara_graph_data, user_input=False,
algorithm=uniform_cost_search_graph,
problem=santa_barbara_problem)
###Output
_____no_output_____
###Markdown
6. DEPTH LIMITED SEARCH Santa_Barbara to El_Cajon
###Code
all_node_colors = []
santa_barbara_problem = GraphProblem('Santa_Barbara', 'El_Cajon', santa_barbara_map)
display_visual(santa_barbara_graph_data, user_input=False,
algorithm=depth_limited_search_for_vis,
problem=santa_barbara_problem)
###Output
_____no_output_____
###Markdown
Barstow to El_Cajon
###Code
all_node_colors = []
santa_barbara_problem = GraphProblem('Barstow', 'El_Cajon', santa_barbara_map)
display_visual(santa_barbara_graph_data, user_input=False,
algorithm=depth_limited_search_for_vis,
problem=santa_barbara_problem)
###Output
_____no_output_____
###Markdown
7. ITERATIVE DEEPENING SEARCH Santa_Barbara to El_Cajon
###Code
all_node_colors = []
santa_barbara_problem = GraphProblem('Santa_Barbara', 'El_Cajon', santa_barbara_map)
display_visual(santa_barbara_graph_data, user_input=False,
algorithm=iterative_deepening_search_for_vis,
problem=santa_barbara_problem)
###Output
_____no_output_____
###Markdown
Barstow to El_Cajon
###Code
all_node_colors = []
santa_barbara_problem = GraphProblem('Barstow', 'El_Cajon', santa_barbara_map)
display_visual(santa_barbara_graph_data, user_input=False,
algorithm=iterative_deepening_search_for_vis,
problem=santa_barbara_problem)
###Output
_____no_output_____
###Markdown
**Task 5** **[35%]**

Fig 2: brest_map (embedded map image of the French road network used in this task, omitted here)

* Now create an Undirected Graph, such as the romania_map, containing a dict of nodes as keys and neighbours as values.
* Start exploring from Bordeaux and try to find Strasbourg in the map.
* Start exploring from Brest and try to find Nice in the map.
* Now show the visualisation of the map [Figure 2] from the task and see how the different searching algorithms perform / how the frontier expands in each of the following search algorithms:
> 1) Breadth First Tree Search
> 2) Depth First Tree Search
> 3) Breadth First Graph Search
> 4) Depth First Graph Search
> 5) Uniform Cost Search
> 6) Depth Limited Search
> 7) Iterative Deepening Search
* Repeat task 3
###Code
brest_map = UndirectedGraph(dict(
Avignon = dict(Grenoble = 227, Lyon = 104, Montpellier = 121),
Bordeaux = dict(Limoges = 220,Nantes = 329,Toulouse = 253),
Brest = dict(Rennes = 244),
Caen = dict(Calais = 120,Paris = 241, Rennes = 176),
Calais = dict(Caen = 120,Nancy = 534, Paris = 297),
Dijon = dict(Nancy = 201, Paris = 313, Strasbourg = 335),
Grenoble = dict (Avignon = 227, Lyon = 104),
Limoges = dict(Bordeaux = 220,Lyon = 389,Nantes = 329,Paris = 396,Toulouse = 313),
Lyon = dict(Dijon = 192,Grenoble = 104,Limoges = 389 ),
Marseille = dict(Avignon = 99, Nice = 188),
Montpellier = dict(Avignon = 121,Toulouse = 240),
Nancy = dict(Calais = 534, Dijon = 201,Paris = 372, Strasbourg = 145),
Nantes = dict(Bordeaux = 329, Limoges = 329,Rennes = 107),
Nice = dict(Marseille = 188),
Paris = dict(Caen = 241,Calais = 297,Dijon = 313,Limoges = 396, Nancy = 372,Rennes = 348),
Rennes = dict(Brest = 244,Caen = 176,Nantes = 107, Paris = 348),
Strasbourg = dict(Dijon = 335, Nancy = 145),
Toulouse = dict(Bordeaux = 253,Limoges = 313,Montpellier = 240 )
))
brest_map.locations=dict(Calais=(240,530),Caen=(220,510),Nancy=(280,480),
Strasbourg=(300,500),Rennes=(200,480),Brest=(190,500),Paris=(250,470),Dijon=(280,450),
Lyon=(280,400),Nantes=(200,425),Limoges=(230,410),Bordeaux=(200,370),Grenoble=(300,370),
Avignon=(280,350),Montpellier=(250,350),Toulouse=(215,350),Marseille=(290,320),Nice=(320,330)
)
brest_locations = brest_map.locations
print(brest_locations)
# node colors, node positions and node label positions
node_colors = {node: 'white' for node in brest_map.locations.keys()}
node_positions = brest_map.locations
node_label_pos = { k:[v[0],v[1]-10] for k,v in brest_map.locations.items() }
edge_weights = {(k, k2) : v2 for k, v in brest_map.graph_dict.items() for k2, v2 in v.items()}
brest_graph_data = { 'graph_dict' : brest_map.graph_dict,
'node_colors': node_colors,
'node_positions': node_positions,
'node_label_positions': node_label_pos,
'edge_weights': edge_weights
}
show_map(brest_graph_data)
###Output
_____no_output_____
###Markdown
1. Breadth First Tree Search Bordeaux to Strasbourg
###Code
all_node_colors = []
brest_problem = GraphProblem('Bordeaux', 'Strasbourg', brest_map)
display_visual(brest_graph_data, user_input=False,
algorithm=breadth_first_tree_search,
problem=brest_problem)
###Output
_____no_output_____
###Markdown
Brest to Nice
###Code
all_node_colors = []
brest_problem = GraphProblem('Brest', 'Nice', brest_map)
display_visual(brest_graph_data, user_input=False,
algorithm=breadth_first_tree_search,
problem=brest_problem)
###Output
_____no_output_____
###Markdown
2.Depth First Tree Search Bordeaux to Strasbourg
###Code
all_node_colors = []
brest_problem = GraphProblem('Bordeaux', 'Strasbourg', brest_map)
display_visual(brest_graph_data, user_input=False,
algorithm=depth_first_tree_search,
problem=brest_problem)
###Output
_____no_output_____
###Markdown
Brest to Nice
###Code
all_node_colors = []
brest_problem = GraphProblem('Brest', 'Nice', brest_map)
display_visual(brest_graph_data, user_input=False,
algorithm=depth_first_tree_search,
problem=brest_problem)
###Output
_____no_output_____
###Markdown
3.Breadth First Graph Search Bordeaux to Strasbourg
###Code
all_node_colors = []
brest_problem = GraphProblem('Bordeaux', 'Strasbourg', brest_map)
display_visual(brest_graph_data, user_input=False,
algorithm=breadth_first_search_graph,
problem=brest_problem)
###Output
_____no_output_____
###Markdown
Brest to Nice
###Code
all_node_colors = []
brest_problem = GraphProblem('Brest', 'Nice', brest_map)
display_visual(brest_graph_data, user_input=False,
algorithm=breadth_first_search_graph,
problem=brest_problem)
###Output
_____no_output_____
###Markdown
4. Depth First Graph Search Bordeaux to Strasbourg
###Code
all_node_colors = []
brest_problem = GraphProblem('Bordeaux', 'Strasbourg', brest_map)
display_visual(brest_graph_data, user_input=False,
algorithm=depth_first_graph_search,
problem=brest_problem)
###Output
_____no_output_____
###Markdown
Brest to Nice
###Code
all_node_colors = []
brest_problem = GraphProblem('Brest', 'Nice', brest_map)
display_visual(brest_graph_data, user_input=False,
algorithm=depth_first_graph_search,
problem=brest_problem)
###Output
_____no_output_____
###Markdown
5. UNIFORM COST SEARCH Bordeaux to Strasbourg
###Code
all_node_colors = []
brest_problem = GraphProblem('Bordeaux', 'Strasbourg', brest_map)
display_visual(brest_graph_data, user_input=False,
algorithm=uniform_cost_search_graph,
problem=brest_problem)
###Output
_____no_output_____
###Markdown
Brest to Nice
###Code
all_node_colors = []
brest_problem = GraphProblem('Brest', 'Nice', brest_map)
display_visual(brest_graph_data, user_input=False,
algorithm=uniform_cost_search_graph,
problem=brest_problem)
###Output
_____no_output_____
###Markdown
6. DEPTH LIMITED SEARCH Bordeaux to Strasbourg
###Code
all_node_colors = []
brest_problem = GraphProblem('Bordeaux', 'Strasbourg', brest_map)
display_visual(brest_graph_data, user_input=False,
algorithm=depth_limited_search_for_vis,
problem=brest_problem)
###Output
_____no_output_____
###Markdown
Brest to Nice
###Code
all_node_colors = []
brest_problem = GraphProblem('Brest', 'Nice', brest_map)
display_visual(brest_graph_data, user_input=False,
algorithm=depth_limited_search_for_vis,
problem=brest_problem)
###Output
_____no_output_____
###Markdown
7. ITERATIVE DEEPENING SEARCH Bordeaux to Strasbourg
###Code
all_node_colors = []
brest_problem = GraphProblem('Bordeaux', 'Strasbourg', brest_map)
display_visual(brest_graph_data, user_input=False,
algorithm=iterative_deepening_search_for_vis,
problem=brest_problem)
###Output
_____no_output_____
###Markdown
Brest to Nice
###Code
all_node_colors = []
brest_problem = GraphProblem('Brest', 'Nice', brest_map)
display_visual(brest_graph_data, user_input=False,
algorithm=iterative_deepening_search_for_vis,
problem=brest_problem)
###Output
_____no_output_____
exams/Midterm_Exam_01_Solutions.ipynb | ###Markdown
**1**. (25 points)
- Write a **recursive** function that returns the length of the hailstone sequence starting with a positive integer $n$. (15 points)
The hailstone sequence is defined by the following rules:
```
- If n is 1, stop
- If n is even, divide by 2 and repeat
- If n is odd, multiply by 3 and add 1 and repeat
```
For example, the hailstone sequence starting with $n = 3$ has length 8:
```
- 3, 10, 5, 16, 8, 4, 2, 1
```
Use the `functools` package to avoid duplicate function calls.
- Find the number that gives the longest sequence for starting numbers less than 100,000. Report the number and the length of the generated sequence. (10 points)
###Code
from functools import lru_cache
@lru_cache(None)
def hailstone(n, k=1):
"""Reprots length of hailstone (Collatz) sequence startign with n."""
if n == 1:
return k
else:
if n % 2 == 0:
return hailstone(n // 2, k+1)
else:
return hailstone(n*3 + 1, k+1)
best = [0, 0]
for i in range(1, 100000):
s = hailstone(i)
if s > best[1]:
best = (i, s)
best
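# Optional check, added for illustration (not part of the original solution): functions wrapped
# with lru_cache expose cache_info(), which shows how often memoised results were reused.
print(hailstone.cache_info())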
###Output
_____no_output_____
###Markdown
An alternative solution.
###Code
@lru_cache(None)
def hailstone_alt(n):
"""Reprots length of hailstone (Collatz) sequence startign with n."""
if n == 1:
return 1
else:
if n % 2 == 0:
return 1 + hailstone_alt(n // 2)
else:
return 1 + hailstone_alt(n*3 + 1)
hailstone_alt(3)
best = [0, 0]
for i in range(1, 100000):
s = hailstone_alt(i)
if s > best[1]:
best = (i, s)
best
###Output
_____no_output_____
###Markdown
**2**. (25 points)
- Create a `pandas` DataFrame called `df` from the data set at https://bit.ly/2ksKr8f, taking care to only read in the `time` and `value` columns. (5 points)
- Fill all rows with missing values with the value from the last non-missing value (i.e. forward fill) (5 points)
- Convert to a `pandas` Series `s` using `time` as the index (5 points)
- Create a new series `s1` with the rolling average using a shifting window of size 7 and a minimum period of 1 (5 points)
- Report the `time` and value for the largest rolling average (5 points)
###Code
import pandas as pd
df = pd.read_csv('https://bit.ly/2ksKr8f', usecols=['time', 'value'])
df = df.fillna(method='ffill')
df.head()
###Output
_____no_output_____
###Markdown
Note: The pd.Series constructor has quite unintuitive behavior when the `index` argument is provided. See `DataFrame_to_Series.ipynb` for this.
###Code
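# Illustrative aside, added to show the behaviour mentioned in the note above (not part of the
# original solution): when `data` is itself a Series, passing `index=` to the pd.Series
# constructor aligns on the existing labels (a reindex) instead of simply relabelling the
# values, which is why the index is assigned separately below.
demo = pd.Series([10, 20, 30], index=[0, 1, 2])
print(pd.Series(data=demo, index=[1, 2, 3]))  # values are picked by label: 20.0, 30.0, NaN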
s = pd.Series(data=df['value'])
s.index = df['time']
s.head()
s1 = s.rolling(7, 1).mean()
s1.head()
s1.sort_values(ascending=False).head(1)
###Output
_____no_output_____
###Markdown
**3**. (25 points)
- Get information in JSON format about starship 23 from the Star Wars API https://swapi.co/api using the `requests` package (5 points)
- Report the time interval between `created` and `edited` in minutes using the `pendulum` package (5 points)
- Replace the URL values stored at the `films` key with the titles of the actual films (5 points)
- Save the new JSON (with film titles and not URLs) to a file `ship.json` (5 points)
- Read in the JSON file you have just saved as a Python dictionary (5 points)
###Code
import requests
url = 'https://swapi.co/api/starships/23'
ship = requests.get(url).json()
ship
import pendulum
created = pendulum.parse(ship['created'])
edited = pendulum.parse(ship['edited'])
(edited - created).in_minutes()
films = [requests.get(film).json()['title'] for film in ship['films']]
films
ship['films'] = films
import json
with open('ship.json', 'w') as f:
json.dump(ship, f)
with open('ship.json') as f:
ship = json.load(f)
ship
###Output
_____no_output_____
###Markdown
**4**. (25 points)
Use SQL to answer the following questions using the SQLite3 database `anemia.db`:
- Count the number of male and female patients (5 points)
- Find the average age of male and female patients (as of right now) (5 points)
- Show the sex, hb and name of patients with severe anemia ordered by severity. Severe anemia is defined as
  - Hb < 7 if female
  - Hb < 8 if male
  (15 points)
You may assume `pid` is the PRIMARY KEY in the patients table and the FOREIGN KEY in the labs table. Note: Hb is short for hemoglobin levels.
Hint: In SQLite3, you can use `DATE('now')` to get today's date.
###Code
%load_ext sql
%sql sqlite:///anemia.db
%%sql
SELECT * FROM sqlite_master WHERE type='table'
%%sql
SELECT * FROM patients LIMIT 3
%%sql
SELECT * FROM labs LIMIT 3
%%sql
SELECT sex, COUNT(sex)
FROM patients
GROUP BY sex
%%sql
SELECT date('now')
%%sql
SELECT sex, round(AVG(date('now') - birthday), 1)
FROM patients
GROUP BY sex
%%sql
SELECT sex, hb, name
FROM patients, labs
WHERE patients.pid = labs.pid AND
((sex = 'M' AND hb < 8) OR (sex = 'F' AND hb < 7))
ORDER BY hb
###Output
* sqlite:///anemia.db
Done.
docs/source/examples/fbanks_mel_example.ipynb | ###Markdown
Mel fbanks examples Install spafe
###Code
%pip install spafe
###Output
Requirement already satisfied: spafe in /home/am/anaconda3/lib/python3.7/site-packages (0.1.2)
Requirement already satisfied: numpy>=1.17.2 in /home/am/anaconda3/lib/python3.7/site-packages (from spafe) (1.21.5)
Requirement already satisfied: scipy>=1.3.1 in /home/am/anaconda3/lib/python3.7/site-packages (from spafe) (1.4.1)
Note: you may need to restart the kernel to use updated packages.
###Markdown
Constant Mel fbanks and inverse Mel fbanks
###Code
from spafe.utils import vis
from spafe.fbanks import mel_fbanks
# init vars
nfilts = 48
nfft = 512
fs = 16000
low_freq = 0
high_freq = 8000
scale = "constant"
# compute the mel filter banks
mel_filbanks = mel_fbanks.mel_filter_banks(nfilts=nfilts,
nfft=nfft,
fs=fs,
low_freq=low_freq,
high_freq=high_freq,
scale=scale)
# plot filter banks
vis.visualize_fbanks(mel_filbanks, "Amplitude", "Frequency (Hz)")
# compute the inverse mel filter banks
imel_filbanks = mel_fbanks.inverse_mel_filter_banks(nfilts=nfilts,
nfft=nfft,
fs=fs,
low_freq=low_freq,
high_freq=high_freq,
scale=scale)
# plot filter banks
vis.visualize_fbanks(imel_filbanks, "Amplitude", "Frequency (Hz)")
###Output
_____no_output_____
###Markdown
Ascendant Mel fbanks and inverse Mel fbanks
###Code
from spafe.utils import vis
from spafe.fbanks import mel_fbanks
# init vars
nfilts = 48
nfft = 512
fs = 16000
low_freq = 0
high_freq = 8000
scale = "ascendant"
# compute the mel filter banks
mel_filbanks = mel_fbanks.mel_filter_banks(nfilts=nfilts,
nfft=nfft,
fs=fs,
low_freq=low_freq,
high_freq=high_freq,
scale=scale)
# plot filter banks
vis.visualize_fbanks(mel_filbanks, "Amplitude", "Frequency (Hz)")
# compute the inverse mel filter banks
imel_filbanks = mel_fbanks.inverse_mel_filter_banks(nfilts=nfilts,
nfft=nfft,
fs=fs,
low_freq=low_freq,
high_freq=high_freq,
scale=scale)
# plot filter banks
vis.visualize_fbanks(imel_filbanks, "Amplitude", "Frequency (Hz)")
###Output
_____no_output_____
###Markdown
Descendant Mel fbanks and inverse Mel fbanks
###Code
from spafe.utils import vis
from spafe.fbanks import mel_fbanks
# init vars
nfilts = 48
nfft = 512
fs = 16000
low_freq = 0
high_freq = 8000
scale = "descendant"
# compute the mel filter banks
mel_filbanks = mel_fbanks.mel_filter_banks(nfilts=nfilts,
nfft=nfft,
fs=fs,
low_freq=low_freq,
high_freq=high_freq,
scale=scale)
# plot filter banks
vis.visualize_fbanks(mel_filbanks, "Amplitude", "Frequency (Hz)")
# compute the inverse mel filter banks
imel_filbanks = mel_fbanks.inverse_mel_filter_banks(nfilts=nfilts,
nfft=nfft,
fs=fs,
low_freq=low_freq,
high_freq=high_freq,
scale=scale)
# plot filter banks
vis.visualize_fbanks(imel_filbanks, "Amplitude", "Frequency (Hz)")
###Output
_____no_output_____
exercises.ipynb | ###Markdown
Settings --- Caution 🚧 If you modify the content below, the program may not work correctly.
###Code
from os.path import join
url = "https://gist.githubusercontent.com/pparkddo/bc96cb657e95a11eeb783ab2cb60a798/raw/97d9518f8e98b1606bfd00fa5a371071b25d44e8/"
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Problems --- Before solving each problem, be sure to run the cell that provides its input values! 1. Compute the sum of all elements of the `array` `a`.
###Code
# Input values for this problem. Modifying them may change the correct answer.
a = pd.read_csv(join(url, "excercise_random_numbers.csv"), header=None).squeeze().to_numpy()
# Write your answer code below
###Output
_____no_output_____
###Markdown
2. Print the name (`name`) of the `dtype` of the `array` `a`.
###Code
# Input values for this problem. Modifying them may change the correct answer.
a = pd.read_csv(join(url, "excercise_random_numbers.csv"), header=None).squeeze().to_numpy()
# Write your answer code below
###Output
_____no_output_____
###Markdown
3. Find the largest value among the elements of the `array` `a`.
###Code
# Input values for this problem. Modifying them may change the correct answer.
a = pd.read_csv(join(url, "excercise_random_numbers.csv"), header=None).squeeze().to_numpy()
# Write your answer code below
###Output
_____no_output_____
###Markdown
4. Read the `csv` file located at the path `path` and assign it to a variable named `df`.
###Code
# Input values for this problem. Modifying them may change the correct answer.
path = join(url, "excercise_records.csv")
# Write your answer code below
###Output
_____no_output_____
###Markdown
5. Select only the `[student_id, answer]` columns of the `DataFrame` `df` and assign the result to a variable named `df`.
###Code
# Input values for this problem. Modifying them may change the correct answer.
df = pd.read_csv(join(url, "excercise_records.csv"))
# Write your answer code below
###Output
_____no_output_____
###Markdown
6. Select only the rows of the `DataFrame` `df` whose `age` column value is `40` or greater, and assign the result to a variable named `df`.
###Code
# Input values for this problem. Modifying them may change the correct answer.
df = pd.read_csv(join(url, "excercise_information.csv"))
# Write your answer code below
###Output
_____no_output_____
###Markdown
7. Multiply the `favorite_number` column of the `DataFrame` `df` by `2` and then subtract `1`, and add the result to `df` as a new column named `modified`.
###Code
# Input values for this problem. Modifying them may change the correct answer.
df = pd.read_csv(join(url, "excercise_information.csv"))
# Write your answer code below
###Output
_____no_output_____
###Markdown
8. Compute the mean of the `age` values of the `DataFrame` `df` for each `department`.
###Code
# Input values for this problem. Modifying them may change the correct answer.
df = pd.read_csv(join(url, "excercise_information.csv"))
# Write your answer code below
###Output
_____no_output_____
###Markdown
9. Sort the `DataFrame` `df` in ascending order by the `timestamp` column.
###Code
# Input values for this problem. Modifying them may change the correct answer.
df = pd.read_csv(join(url, "excercise_records.csv"), parse_dates=["timestamp"])
# Write your answer code below
###Output
_____no_output_____
###Markdown
10. From the `DataFrame` `df`, build a `DataFrame` that has `student_id` as its rows, each `answer` as its columns, and the count (`count`) of `timestamp` values as its entries. (It should be identical to the example `DataFrame` shown below.)
###Code
# Input values for this problem. Modifying them may change the correct answer.
df = pd.read_csv(join(url, "excercise_records.csv"))
# Example of the expected output. Manipulate 'df' to produce a DataFrame identical to the one below.
pd.read_csv(join(url, "excercise_records_pivoted.csv")).set_index("student_id")
# Write your answer code below
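# One possible solution, added purely for illustration (other approaches are equally valid, and
# depending on the expected output you may also want to pass fill_value=0):
# df.pivot_table(index="student_id", columns="answer", values="timestamp", aggfunc="count")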
###Output
_____no_output_____
###Markdown
11. We want to add information about each student to the `DataFrame` `records`. Merge `records` and `information` into a single table using `student_id` as the `key`.
###Code
# Input values for this problem. Modifying them may change the correct answer.
records = pd.read_csv(join(url, "excercise_records.csv"))
information = pd.read_csv(join(url, "excercise_information.csv"))
# Write your answer code below
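# One possible solution, added purely for illustration:
# records.merge(information, on="student_id")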
###Output
_____no_output_____
###Markdown
12. Convert the `timestamp` column of the `DataFrame` `df` from `str` dtype to `datetime` dtype.
###Code
# Input values for this problem. Modifying them may change the correct answer.
df = pd.read_csv(join(url, "excercise_records.csv"))
# Write your answer code below
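# One possible solution, added purely for illustration:
# df["timestamp"] = pd.to_datetime(df["timestamp"])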
###Output
_____no_output_____
###Markdown
https://api.open.fec.gov/developers/

Every API works differently. Let's find the committee ID for our Congressional representative: C00373001.

https://api.open.fec.gov/v1/committee/C00373001/totals/?page=1&api_key=DEMO_KEY&sort=-cycle&per_page=20

`requests` library

First, we install. That's like buying it.
###Code
!pip install requests
###Output
_____no_output_____
###Markdown
Then, we import. That's like getting it out of the cupboard.
###Code
import requests
###Output
_____no_output_____
###Markdown
Oakwood High School
###Code
response = requests.get('http://ohs.oakwoodschools.org/pages/Oakwood_High_School')
response.ok
response.status_code
print(response.text)
###Output
_____no_output_____
###Markdown
We have backed our semi up to the front door. OK, back to checking out politicians.
###Code
url = 'https://api.open.fec.gov/v1/committee/C00373001/totals/?page=1&api_key=DEMO_KEY&sort=-cycle&per_page=20'
response = requests.get(url)
response.ok
response.status_code
response.json()
response.json()['results']
results = response.json()['results']
results[0]['cycle']
results[0]['disbursements']
for result in results:
print(result['cycle'])
for result in results:
year = result['cycle']
spent = result['disbursements']
print('year: {}\t spent:{}'.format(year, spent))
###Output
_____no_output_____
###Markdown
[Pandas](http://pandas.pydata.org/)
###Code
!pip install pandas
import pandas as pd
data = pd.DataFrame(response.json()['results'])
data
data = data.set_index('cycle')
data
data['disbursements']
data[data['disbursements'] < 1000000 ]
###Output
_____no_output_____
###Markdown
[Bokeh](http://bokeh.pydata.org/en/latest/)
###Code
!pip install bokeh
from bokeh.charts import Bar, show, output_notebook
by_year = Bar(data, values='disbursements')
output_notebook()
show(by_year)
###Output
_____no_output_____
###Markdown
Playtime

[so many options](http://bokeh.pydata.org/en/latest/docs/user_guide/charts.html)

- Which column to map?
- Colors or styles?
- Scatter?
- Better y-axis label?
- Some other candidate committee? Portman C00458463, Brown C00264697
- Filter it (a small filtering sketch follows the next chart)

Where's it coming from?

https://api.open.fec.gov/v1/committee/C00373001/schedules/schedule_a/by_state/?per_page=20&api_key=DEMO_KEY&page=1&cycle=2016
###Code
url = 'https://api.open.fec.gov/v1/committee/C00373001/schedules/schedule_a/by_state/?per_page=20&api_key=DEMO_KEY&page=1&cycle=2016'
response = requests.get(url)
results = response.json()['results']
data = pd.DataFrame(results)
data
data = data.set_index('state')
by_state = Bar(data, values='total')
show(by_state)
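# A possible "Playtime" variation, added as an illustrative sketch only: filter the states to
# the largest contributors before charting. The 500000 cutoff is an arbitrary assumption
# chosen for demonstration; adjust it to the data you actually get back.
top_states = data[data['total'] > 500000]
show(Bar(top_states, values='total'))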
###Output
_____no_output_____
###Markdown
Tax Revenues (Income!) in Barcelona

Open Data Barcelona provides lots of fun data about our city. You can access it here: https://opendata-ajuntament.barcelona.cat

We will be examining average tax returns per neighborhood ("barri") in the years 2016 and 2015. Tax revenues are, naturally, a proxy for income, so we're really looking at how (taxable) income varies across the city.

The columns are in Catalan, so here's a quick explanation in English:

- Any = Year
- Codi_Districte = District Code
- Nom_Districte = District Name
- Codi_Barri = Neighborhood Code
- Nom_Barri = Neighborhood Name
- Seccio_Censal = Census Tract Number
- Import_Euros = Tax Revenue (average over all individuals in the census tract)
###Code
# Let's begin by reading the file "2016_renda.csv"
# into a DataFrame:
import pandas as pd
df = pd.read_csv('2016_renda.csv')
#
# 1)
# Get the (5) barris with the highest average tax revenues
# (i.e. average over the census tracts in each barri)
df.groupby('Nom_Barri') \
.mean() \
.reset_index() \
.sort_values('Import_Euros', ascending=False) \
[:5] \
[['Nom_Barri', 'Import_Euros']]
#
# 2)
# Get the difference in mean revenue between the
# poorest census tract and the richest, within
# each district.
#
# You should return a DataFrame with 2 columns:
# The district name and the difference in revenue.
def get_inequality(df):
return df.Import_Euros.max() - df.Import_Euros.min()
df.groupby('Nom_Districte') \
.apply(get_inequality) \
.sort_values() \
.reset_index(name='gap')
###Output
_____no_output_____
###Markdown
Planning Your Attack

One pattern to make your code more legible, and to make it easier to break down big problems, is to ensure that your code can be read on two levels: one "declarative" level, where someone can read (or write) *what* will happen, and another "imperative" level, where someone can read (or write!) *how* the thing is happening.

Data preparation often involves a "pipeline", a uni-directional flow of transformations where the data is moved, one step at a time, towards the final format.

It's important, when you try to create a pipeline (which can be a big problem), to make a plan.

One way to make a plan is to start from the final goal and write out the following statement:

1. "If I had ________ (INPUT), then it would be easy to make [FINAL GOAL], I would just need to ________ (step)."

Where you should think of INPUT as "data ______ in data structure ______".

That will be the final step of your pipeline. Now repeat the statement, with the FINAL GOAL being replaced with the INPUT of the previous step:

2. "If I had ________ (INPUT), then it would be easy to make [PREVIOUS INPUT], I would just need to ________ (step)."

Let's see an example of this method of planning by working out an exercise:
###Code
#
# Your goal will be the following:
#
# We want to understand the income variation
# (or "spatial inequality") within each "barri".
# However, each barri is a different size.
# Larger barris will naturally have a greater
# variation, even if there isn't great variation
# between one block and the next, which is what
# we want to understand with spatial inequality.
# To deal with this, we will apply a naive solution
# of simply using the number of census tracts as
# a proxy for "physical size" of the barri. We
# will then divide the income gap (difference between
# lowest and highest income tract) within each barri
# by the number of tracts as a way to "control for size".
# This will be our measure of "spatial inequality".
#
# Your job is to return a dataframe sorted by
# spatial inequality, with any barri with one
# tract (0 inequality) removed.
#
#
# We will try to lay out a plan to solve the problem
# at hand with the process we just went over:
# 1. If I had a <<an extra column on the dataframe of
# the income gap divided by the number of tracts>>
# then it would be easy to <<get the barris with
# highest and lowest normalized income gap>>, I
# would just need to <<sort the dataframe by that
# column>>>.
#
# 2. If I had << A. a column for the income gap and
# B. a column for the number of tracts in a barri>>
# then it would be easy to make << an extra column on the
# dataframe of the income gap divided by the number of tracts>>
# I would just need to <<divide one column by the other>>.
#
#3b. If I had <<the raw data>>, then it would be easy to make
# <<a column with the number of tracts>>, I would just need
# to <<count the number of tracts per barri>>.
#
#3a. If I had <<the raw data>>, then it would be easy to make
# <<a column with the income gap>>, I would just need to
# <<calculate the income difference between tracts in each
# barri>>.
#
# Now we can use this outline to write a declarative pipeline
# function (in the opposite order of the steps we wrote):
def spatial_inequality_in_barcelona(df):
df = add_income_diff_for_barris(df)
df = add_num_tracts_per_barri(df)
df = add_inequality(df)
return inequality_by_barri(df)
# In the next exercises, you will write each of those functions,
# and in the end, use this function to compare barris based on
# their spatial inequality.
#
# 3)
# Write the function: "add_income_diff_for_barris"
#
# HINT: Make sure the returned dataframe is the
# same size as the original!
#
def add_diff(df):
gap = get_inequality(df)
return df.assign(gap=gap)
def add_income_diff_for_barris(df):
return df.groupby('Nom_Barri') \
.apply(add_diff) \
.reset_index(drop=True)
df = add_income_diff_for_barris(df)
df
#
# 4)
# Create the function: "add_num_tracts_per_barri"
def add_num_tracts_per_barri(df):
return df.groupby('Nom_Barri') \
.apply(lambda df: df.assign(num_tracts = df.shape[0])) \
.reset_index(drop=True)
df = add_num_tracts_per_barri(df)
df
#
# 5)
# Create the function: "add_inequality"
def add_inequality(df):
return df.groupby('Nom_Barri') \
.apply(lambda df: df.assign(inequality = df.gap/df.num_tracts)) \
.reset_index(drop=True)
df = add_inequality(df)
df
#
# 6)
# Add the function "inequality_by_barri"
#
# Note that this function should probably
# make sure that the dataframe has the
# same number of rows as number of barris
# (i.e. one barri per row).
#
# Also note that some barris have an inequality
# of 0, let's go ahead and remove them!
def inequality_by_barri(df):
return df.drop_duplicates('Nom_Barri') \
.drop(columns = ['Seccio_Censal']) \
.sort_values('inequality') \
.pipe(lambda df: df[df.inequality != 0])
inequality_by_barri(df)
#
# 7)
# Try out the function we wrote out in the planning
# phase, spatial_inequality_in_barcelona,
# does it work when given the raw data?
#
# Now let's go ahead and "refactor"
# "Refactoring" means rewriting the code without
# changing the functionality. What we wrote works,
# and is great and legible.
#
# But maybe breaking it down into so many separate
# steps, while didactic, could be considered overkill
# and maybe isn't the most efficient. You probably
# grouped by "Nom_Barri" at least 3 separate times!
#
# Try to rewrite the function spatial_inequality_in_barcelona
# to be more efficient (to only groupby Nom_Barri once!)
# and a bit shorter.
def add_inequality(df):
gap = df.Import_Euros.max() - df.Import_Euros.min()
sections = df.shape[0]
return df.assign(gap=gap,
sections=sections,
inequality=gap/sections)
def spatial_inequality_in_barcelona(df):
return df.groupby('Nom_Barri') \
.apply(add_inequality) \
.reset_index(drop=True) \
.sort_values('inequality') \
.pipe(lambda df: df[df.gap != 0]) \
[['Nom_Barri', 'gap', 'sections', 'inequality']]
spatial_inequality_in_barcelona(df)
# Open Data Barcelona provides the tax data for years
# 2015 and 2016 in different csv's. Read in the tax data
# for year 2015 so we can see how incomes have changed
# between the years.
#
# 8)
# Get the growth of the mean tax reveneue per census
# tract. Create a DataFrame that has the district, barri,
# and census tract as well as the difference in revenue
# between the years for each tract.
#
# Sort by the difference per tract.
def get_growth(df):
growth = df.sort_values('Any').Import_Euros.diff().iloc[-1]
df['growth'] = growth
return df
both = pd.concat([df, pd.read_csv('2015_renda.csv')]).reset_index(drop=True)
both = both.groupby(['Nom_Barri', 'Seccio_Censal']) \
.apply(get_growth) \
.sort_values('growth')
both
#
# 9)
# Get the mean growth per barri.
# Sort by mean growth.
both.groupby('Nom_Barri').mean().sort_values('growth')
###Output
_____no_output_____
###Markdown
Defining an exercise

Exercises can be instantiated from the `Exercise` class. An exercise is instantiated by passing a markdown string with the exercise content that will be displayed to the learner. This markdown string can contain LaTeX, which must be wrapped in dollar signs ($). The rendered exercise content can be seen by calling the `display` method on an exercise instance.
###Code
m = "What is $1 + 1$?"
e = Exercise(m)
e.add_answer(expression=2, correct=True, feedback="Indeed, $1 + 1 = 2$")
e.add_answer(expression=0, correct=False, feedback="Hmmm, did you compute $1 - 1 = 0$ instead?")
e.add_default_feedback("Please revisit the definition of natural numbers and the ($+$) operator")
e.write("integer_add_static")
e.play()
###Output
_____no_output_____
###Markdown
Adding answer rules

Answers can be added using the `add_answer` method. User answers can be simulated for testing purposes by using the `evaluate_answer` method.
###Code
e.add_answer(2, True, "That's right! $1 + 1 = 2$")
e.evaluate_answer(2)
e.add_answer(0, False, "Unfortunately that's not right, did you compute $1 - 1 = 0$ instead?")
e.evaluate_answer(0)
###Output
_____no_output_____
###Markdown
Default feedback can be added, shown to the user when no answer rule matches, defaulting to "Incorrect" if not specified.
###Code
e.add_default_feedback("Please check the definition of natural numbers and the ($+$) operator")
e.evaluate_answer(3)
###Output
_____no_output_____
###Markdown
Before an exercise can be published, it should be written.
###Code
e.write()
e.publish()
# print(json.dumps(e.data, indent=2))
###Output
Published succesfully, preview at: https://www.mscthesis.nl/preview?id=b3fc9b31-0910-4215-945c-51c823b17f6c
###Markdown
Parameterizing an exercise

An exercise can contain parameters by using the `@param` notation in markdown templates. A dict containing SymPy objects should then be passed to a MarkdownBlock to replace the parameters with LaTeX code generated by the MarkdownBlock.
###Code
m = r"What is $@a + @b$?"
params = {}
params["a"] = np.random.randint(10)
params["b"] = np.random.randint(10)
e = Exercise(MarkdownBlock(m, params))
e.add_answer(params["a"] + params["b"], True, "That's right!")
e.play()
###Output
_____no_output_____
###Markdown
Exercises with matrices Vector addition
###Code
sp.latex(sp.Matrix([1,1,2]))
m = "What is $@v1 + @v2$?"
params = {}
params["v1"] = sp.randMatrix(r=4, c=1, min=0, max=10)
params["v2"] = sp.randMatrix(r=4, c=1, min=0, max=10)
params["ans"] = params["v1"] + params["v2"]
e = Exercise(MarkdownBlock(m, params))
e.add_answer(params["ans"], True, "Correct!")
e.play()
###Output
_____no_output_____
###Markdown
Matrix multiplication
###Code
s = "What is $@a @b$?"
rows = np.random.randint(1, 4)
columns = np.random.randint(1, 4)
params = {}
params["a"] = sp.randMatrix(r=rows, c=columns, min=0, max=10)
params["b"] = sp.randMatrix(r=columns, c=rows+2, min=0, max=10)
ans = params["a"] * params["b"]
e = Exercise(MarkdownBlock(s, params))
e.add_answer(ans, correct=True, feedback="Yes!")
e.play()
digits = load_digits()
sorted_indices = np.argsort(digits.target)
nums = digits.images[sorted_indices]
def save_image_for(matrix, filename):
fig, ax = plt.subplots()
ax.xaxis.set_label_position('top')
ax.set_xticklabels([i for i in range(0, 9)])
ax.yaxis.set_label_position('left')
ax.set_yticklabels([i for i in range(0, 9)])
# Minor ticks
ax.set_xticks(np.arange(-.5, 10, 1), minor=True)
ax.set_yticks(np.arange(-.5, 10, 1), minor=True)
ax.grid(which='minor', color='black', linestyle='-', linewidth=2)
ax.matshow(matrix, cmap='binary')
plt.savefig(filename, dpi=300, bbox_inches='tight')
def to_binary(m):
return np.where(m > 7, 1, 0)
zero_1 = nums[0]
zero_1 = to_binary(zero_1)
zero_2 = nums[1]
zero_2 = to_binary(zero_2)
# save_image_for(zero_1, "zero_1")
# save_image_for(zero_2, "zero_2")
# save_image_for(np.abs(zero_1 - zero_2), "diff")
t = r"""
<div style="display: flex; align-items: center; justify-content: center; margin-bottom: 10px;">
$A = $<img src="zero_1.png" width="150"/>
$B = $<img src="zero_2.png" width="150"/>
$D = $<img src="diff.png" width="150"/>
</div>
$A = @z1, B = @z2, D = |A - B| = @d, \sum D = @s$
"""
z1 = sp.Matrix(zero_1)
z2 = sp.Matrix(zero_2)
params = {}
params["z1"] = z1
params["z2"] = z2
distance_matrix = np.abs(z1 - z2)
d = sp.Matrix(distance_matrix)
params["d"] = d
params["s"] = np.sum(distance_matrix)
e = Exercise(MarkdownBlock(t, params))
e.display()
e.write()
# e.publish()
###Output
_____no_output_____
###Markdown
Matrix indexing

Tasks:
- Create exercise testing vector indexing
- Create exercise testing vector inner-product
- Create exercise testing matrix multiplication
###Code
m = r"""
Consider the matrix $A = @a$
What is $a_{@i,@j}$?
"""
a = np.arange(25)
np.random.shuffle(a)
a = a.reshape((5, 5))
params = {}
params["a"] = sp.Matrix(a)
params["i"] = sp.simplify(np.random.randint(6))
params["j"] = sp.simplify(np.random.randint(6))
e = Exercise(MarkdownBlock(m, params))
e.display()
e.html
e.write()
e.publish()
m = r"""
<figure>
<img src='o_grid.png' alt='missing' style="height:200px;"/>
<figcaption>Caption goes here</figcaption>
</figure>
"""
def generator():
e = Exercise(m)
return e
Exercise.write_multiple(generator, 1, "caption")
m = r"""
<div class="d-flex flex-justify-between">
<div class="d-flex flex-column flex-items-center">
<img src='o_grid.png' alt='missing' style="height:200px; object-fit: contain;"/>
<div>Cap</div>
</div>
<div class="d-flex flex-column flex-items-center">
<img src='o_grid.png' alt='missing' style="height:200px; object-fit: contain;"/>
<div>Cap</div>
</div>
</div>
"""
p = Exercise(m)
p.write("caption")
###Output
_____no_output_____
###Markdown
PAISS Practical Deep-RL by Criteo Research
###Code
%pylab inline
from utils import RLEnvironment, RLDebugger
import random
from keras.optimizers import Adam, RMSprop, SGD
from keras.layers import Dense, Conv2D, Flatten, Input, Reshape, Lambda, Add, RepeatVector
from keras.models import Sequential, Model
from keras import backend as K
env = RLEnvironment()
print(env.observation_space, env.action_space)
###Output
_____no_output_____
###Markdown
Random agent
###Code
class RandomAgent:
"""The world's simplest agent!"""
def __init__(self, action_space):
self.action_space = action_space
def get_action(self, state):
return self.action_space.sample()
###Output
_____no_output_____
###Markdown
Play loop

Note that this Gym environment is considered solved as soon as you find a policy which scores 200 on average.
###Code
env.run(RandomAgent(env.action_space), episodes=20, display_policy=True)
###Output
_____no_output_____
###Markdown
DQN Agent - Online

Here is Keras code for training a simple DQN. It is presented first for the sake of clarity. Nevertheless, the trained network is immediately used to collect the new training data, so unless you are lucky you won't be able to find a way to solve the task. Just replace the `???` with some parameters which seem reasonable to you ($\gamma>1$ is not reasonable and big steps are prone to numerical instability) and watch the failure of the policy training.
###Code
class DQNAgent(RLDebugger):
def __init__(self, observation_space, action_space):
RLDebugger.__init__(self)
# get size of state and action
self.state_size = observation_space.shape[0]
self.action_size = action_space.n
# hyper parameters
self.gamma = ???    # discount factor; train_model below relies on self.gamma, so it must be set here (a value just below 1 is typical)
self.learning_rate = ??? # recommended value range: [1e-3, 1e-1]
self.model = self.build_model()
self.target_model = self.model
# approximate Q function using Neural Network
# state is input and Q Value of each action is output of network
def build_model(self, trainable=True):
model = Sequential()
# try adding neurons. Recommended value range [10, 256]
# try adding layers. Recommended value range [1, 4]
model.add(Dense(units=???, input_dim=self.state_size, activation=???, trainable=trainable))
model.add(Dense(units=self.action_size, activation=???, trainable=trainable))
# usual activations: 'linear', 'relu', 'tanh', 'sigmoid'
model.compile(loss=???, optimizer=Adam(lr=self.learning_rate))
# usual losses: 'mse', 'logcosh', 'mean_absolute_error'
model.summary() # Display summary of the network.
# Check that your network contains a "reasonable" number of parameters (a few hundreds)
return model
# get action from model using greedy policy.
def get_action(self, state):
q_value = self.model.predict(state)
best_action = np.argmax(q_value[0]) #The [0] is because keras outputs a set of predictions of size 1
return best_action
# train the target network on the selected action and transition
def train_model(self, action, state, next_state, reward, done):
target = self.model.predict(state)
# We use our internal model in order to estimate the V value of the next state
target_val = self.target_model.predict(next_state)
# Q Learning: target values should respect the Bellman's optimality principle
if done: #We are on a terminal state
target[0][action] = reward
else:
target[0][action] = reward + self.gamma * (np.amax(target_val))
# and do the model fit!
loss = self.model.fit(state, target, verbose=0).history['loss'][0]
self.record(action, state, target, target_val, loss, reward)
agent = DQNAgent(env.observation_space, env.action_space)
env.run(agent, episodes=500)
agent.plot_loss()
###Output
_____no_output_____
###Markdown
Let's try with a fixed initial position
###Code
agent = DQNAgent(env.observation_space, env.action_space)
env.run(agent, episodes=300, seed=0)
agent.plot_loss()
###Output
_____no_output_____
###Markdown
DQN Agent with Exploration

This is our first agent which is going to solve the task. It will typically require running a few hundred episodes to collect the data. The difference with the previous agent is that you are going to add an exploration mechanism in order to take care of the data collection for the training. We advise using an $\varepsilon_n$-greedy policy, meaning that the value of $\varepsilon$ decays over time. Several kinds of decay can be found in the literature; a simple one is a multiplicative update of $\varepsilon$ by a constant smaller than 1, applied as long as $\varepsilon$ is still larger than a small minimal rate (typically in the range 1%-5%). A minimal sketch of such a schedule is included at the top of the code cell below.

You need to:
* Code your exploration (the areas are tagged in the code by some TODOs).
* Tune the hyperparameters (including the ones from the previous section) in order to solve the task. This may not be so easy and will likely require more than 500 episodes and a final small value of epsilon. Next sessions will be about techniques to increase sample efficiency (i.e. require fewer episodes).
###Code
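# Illustrative sketch of the multiplicative epsilon decay described above. It is added for
# illustration only; the concrete values (start, floor, decay factor) are assumptions, not the
# required answer. Epsilon starts near 1, is multiplied by a constant < 1 at every step, and
# stops decaying once a small floor is reached.
eps, eps_min, eps_decay = 1.0, 0.05, 0.995
schedule = [eps]
for _ in range(999):
    eps = max(eps_min, eps * eps_decay)
    schedule.append(eps)
print(schedule[0], schedule[500], schedule[-1])  # decays from 1.0 towards the 0.05 floor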
class DQNAgentWithExploration(DQNAgent):
def __init__(self, observation_space, action_space):
super(DQNAgentWithExploration, self).__init__(observation_space, action_space)
# exploration schedule parameters
self.t = 0
self.epsilon = ??? # Defines the probability of taking a random action.
# Should be in range [0,1]. The closer to 0 the greedier.
# Hint: start close to 1 (exploration) and end close to zero (exploitation).
# decay epsilon
def update_epsilon(self):
# TODO write the code for your decay
self.t += 1
self.epsilon = ???
# get action from model using greedy policy
def get_action(self, state):
# exploration
if random.random() < self.epsilon:
return random.randrange(self.action_size)
q_value = self.model.predict(state)
return np.argmax(q_value[0])
agent = DQNAgentWithExploration(env.observation_space, env.action_space)
env.run(agent, episodes=500, print_delay=50, seed=0)
agent.plot_state()
###Output
_____no_output_____
###Markdown
DQN Agent with Exploration and Experience ReplayWe are now going to save some samples in a limited memory in order to build minibatches during the training. The exploration policy remains the same as in the previous section. Storage is already coded; you just need to modify the tagged section, which is about building the mini-batch sent to the optimizer.
###Code
from collections import deque
class DQNAgentWithExplorationAndReplay(DQNAgentWithExploration):
def __init__(self, observation_space, action_space):
super(DQNAgentWithExplorationAndReplay, self).__init__(observation_space, action_space)
self.batch_size = ??? # Recommended value range [10, 1000]
# create replay memory using deque
self.memory = deque(maxlen=???) # Recommended value range [10, 20000]
def create_minibatch(self):
# pick samples randomly from replay memory (using batch_size)
batch_size = min(self.batch_size, len(self.memory))
samples = random.sample(self.memory, batch_size)
states = np.array([_[0][0] for _ in samples])
actions = np.array([_[1] for _ in samples])
rewards = np.array([_[2] for _ in samples])
next_states = np.array([_[3][0] for _ in samples])
dones = np.array([_[4] for _ in samples])
return (states, actions, rewards, next_states, dones)
def train_model(self, action, state, next_state, reward, done):
# save sample <s,a,r,s'> to the replay memory
self.memory.append((state, action, reward, next_state, done))
if len(self.memory) >= self.batch_size:
states, actions, rewards, next_states, dones = self.create_minibatch()
targets = self.model.predict(states)
target_values = self.target_model.predict(next_states)
for i in range(self.batch_size):
# Approx Q Learning
if dones[i]:
targets[i][actions[i]] = rewards[i]
else:
targets[i][actions[i]] = rewards[i] + self.gamma * (np.amax(target_values[i]))
# and do the model fit!
loss = self.model.fit(states, targets, verbose=0).history['loss'][0]
for i in range(self.batch_size):
self.record(actions[i], states[i], targets[i], target_values[i], loss / self.batch_size, rewards[i])
agent = DQNAgentWithExplorationAndReplay(env.observation_space, env.action_space)
env.run(agent, episodes=300, print_delay=50)
agent.plot_state()
agent.plot_bellman_residual()
###Output
_____no_output_____
###Markdown
Double DQN Agent with Exploration and Experience ReplayNow we want to have two identical networks and, for some number of timesteps, keep frozen the one in charge of the evaluation (*i.e.* the one used to compute the targets). Note that you can find variants where the target network is instead updated at every timestep, but only by a small fraction of its difference with the policy network.
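For contrast with the hard copy performed by `update_target_model` below, such a "soft" update might look like the following sketch (`tau` is an illustrative constant, not a value from this notebook):

```python
# Hypothetical Polyak-style update: move the target weights a small step towards the policy weights.
def soft_update(target_model, model, tau=0.01):
    new_weights = [tau * w + (1.0 - tau) * tw
                   for w, tw in zip(model.get_weights(), target_model.get_weights())]
    target_model.set_weights(new_weights)
```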
###Code
class DoubleDQNAgentWithExplorationAndReplay(DQNAgentWithExplorationAndReplay):
def __init__(self, observation_space, action_space):
super(DoubleDQNAgentWithExplorationAndReplay, self).__init__(observation_space, action_space)
# TODO: initialize a second model
self.target_model = self.build_model(trainable=False)
def update_target_model(self):
# copy weights from the model used for action selection to the model used for computing targets
self.target_model.set_weights(self.model.get_weights())
agent = DoubleDQNAgentWithExplorationAndReplay(env.observation_space, env.action_space)
env.run(agent, episodes=200, print_delay=10)
agent.plot_diagnostics()
###Output
_____no_output_____
###Markdown
To observe the actual performance of the policy we should set $\varepsilon=0$.
###Code
agent.epsilon = 0
agent.memory = deque(maxlen=1)
agent.batch_size = 1
env.run(agent, episodes=200, print_delay=33)
agent.plot_diagnostics()
###Output
_____no_output_____
###Markdown
Duelling DQN If time allows, adapt the description from http://torch.ch/blog/2016/04/30/dueling_dqn.html to our setting.
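In brief (summarizing the linked post), the dueling architecture splits the network into a state-value stream $V(s)$ and an advantage stream $A(s,a)$ and recombines them as

\begin{equation}
Q(s,a) = V(s) + \Big(A(s,a) - \frac{1}{|\mathcal{A}|}\sum_{a'} A(s,a')\Big),
\end{equation}

which is what the `value_stream`/`advantage_stream` construction in the skeleton below implements (the mean subtraction is the `Lambda` layer).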
###Code
class DoubleDuelingDQNAgentWithExplorationAndReplay(DoubleDQNAgentWithExplorationAndReplay):
def __init__(self, observation_space, action_space):
super(DoubleDuelingDQNAgentWithExplorationAndReplay, self).__init__(observation_space, action_space)
def build_model(self, trainable=True):
value_input = Input(shape=(self.state_size,))
# Value stream
value_stream_hidden = Dense(???, input_dim=self.state_size, activation=???, trainable=trainable)(value_input)
value_stream_activation = Dense(1, activation=???, trainable=trainable)(value_stream_hidden)
repeat_value_stream = RepeatVector(self.action_size)(value_stream_activation)
value_stream = Flatten()(repeat_value_stream)
# Advantage stream
advantage_stream_hidden = Dense(???, input_dim=self.state_size, activation=???, trainable=trainable)(value_input)
advantage_stream_activation = Dense(self.action_size, activation=???, trainable=trainable)(advantage_stream_hidden)
advantage_stream = Lambda(lambda layer: layer - K.mean(layer))(advantage_stream_activation)
# Merge both streams
q_values = Add()([value_stream, advantage_stream])
model = Model(inputs=[value_input], outputs=q_values)
model.compile(loss=???, optimizer=???(lr=self.learning_rate))
model.summary()
return model
agent = DoubleDuelingDQNAgentWithExplorationAndReplay(env.observation_space, env.action_space)
env.run(agent, episodes=300, print_delay=50)
agent.plot_diagnostics()
###Output
_____no_output_____
###Markdown
Data Munging: StringsData munging, the process of wrestling with data to make it into something clean and usable, is an important part of any job analyzing data.Today we're going to focus on some data that has information we want, but the information is not properly *structured*. In particular, it comes as a single column with a string value, and we want to turn it into a series of boolean columns.To do that, we're going to use the powerful built-in methods Python provides us to work with strings. You can read all about the available methods here: https://docs.python.org/3/library/string.htmlIn particular, we're going to use `.split()`, which is a method that turns a string into a list of strings, and `.strip()`, which removes the "whitespace" from a string.
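For example (standard library behaviour, with made-up strings):

```python
raw = ' Part-Time, Contract '
keywords = raw.strip().split(', ')   # strip removes surrounding whitespace, split breaks on ', '
print(keywords)                      # ['Part-Time', 'Contract']
```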
###Code
# Play:
#
# Take a look at the official Python documentation for the
# "split" and "strip" methods. Play around with them now
# to make sure you understand how they work:
#
# 1)
# Read the data in a csv called "jobs.csv" into a DataFrame.
# This data is from a site that posts job ads online.
# Each row represents an ad for a job on the site.
#
# Take a look at your data and note that you have
# a column called `pay`. That column is a string,
# as far as Python is concerned. However, to us
# humans, we notice that the information is more
# structured than that. It seems like a "collection
# of keywords," where each job can have zero or more
# keywords such as "Part-Time" or "Contract" which
# describe the type of contract.
#
# There are 6 different contract types.
#
# Your goal:
# Transform the DataFrame, adding 6 boolean columns,
# one for each contract type, indicating whether or
# not that job has that contract type.
#
# NOTE: This is a relatively large task.
# Break it down into a series of steps, just like
# we did in the last exercises. Work on each
# step separately.
#
# Many of the steps will require to work with the
# string methods mentioned above.
#
# 2)
# Break down your tasks, write a "pipeline" function
# called "add_contract_types".
#
# HINT: last time, each "step" returned a DataFrame
# object. This might not be the case this time, the
# steps can return any data type that is helpful
# to move on to the next step!
#
# 3)
# Now write all the "steps" (functions) needed
# by your pipeline function (add_contract_types)
#
# 4)
# Now add the needed columns by using your function
# add_contract_types. You will want the returned
# DataFrame for some of the further exercises.
#
# 5)
# Assume that all jobs that don't specify a contract
# type in "pay" are Full-time. Create a new column,
# called "Full-time", which is a boolean that
# should be True if the job is Full-time, false otherwise.
#
# 6)
# Get the percentage of jobs for each contract type
# i.e. number of jobs of X type / number of jobs
#
# 7)
# Which industries ('category') have the highest
# percentage of part-time jobs posted?
# The lowest?
#
# 8)
# Which industries ('category') have the highest
# percentage of Internship jobs posted?
# The lowest?
# Note: this question is very similar to the last.
# make a function that can answer both questions
#
# 9)
# Use your function to ask the same question about
# Commission jobs
#
# 10)
# Let's call jobs that are either Temporary,
# Part-time or Internships "precarious".
#
# Order the industries (category) by the
# percentage of precarious jobs
#
# HINT: can you modify some previous function
# to make this question easy to answer?
#
# HINT: Make sure your variables reflect their
# content. Collections should be plural, single
# elements should be singular.
#
# 11)
# Get the 5 companies who post the most jobs
# in each category, along with the number of
# jobs listed by each company.
# 12)
# Is any company in the top 5 across more than one category?
# Return the companies who are, along with the categories
# in which they appear in the top 5.
#
# FORMAT: Dataframe with 3 columns: company, category, number of jobs
#
# HINT: take a look at the `.filter` method on GroupBy:
# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.filter.html
###Output
_____no_output_____
###Markdown
Programming and Data Analysis> Homework 0Kuo, Yao-Jen from [DATAINPOINT](https://www.datainpoint.com) Instructions- We've imported necessary modules at the beginning of each exercise.- We've put the necessary files (if any) in the working directory of each exercise.- We've defined the names of functions/inputs/parameters for you.- Write down your solution between the comments `### BEGIN SOLUTION` and `### END SOLUTION`.- It is NECESSARY to `return` the answer; tests will fail if you only print it out.- Do not use the `input()` function; it will halt the notebook while running tests.- Run the tests to see if your solutions are right: Kernel -> Restart & Run All -> Restart and Run All Cells.- You can run tests after each question or after finishing all questions.
###Code
import unittest
###Output
_____no_output_____
###Markdown
01. Define a function named `convert_fahrenheit_to_celsius(x)` which converts Fahrenheit degrees to Celsius degrees.\begin{equation}Celsius^{\circ} C = (Fahrenheit^{\circ} F - 32) \times \frac{5}{9}\end{equation}- Expected inputs:a numeric `x`.- Expected outputs:a numeric.
###Code
def convert_fahrenheit_to_celsius(x):
"""
>>> convert_fahrenheit_to_celsius(212)
100.0
>>> convert_fahrenheit_to_celsius(32)
0.0
"""
### BEGIN SOLUTION
C = (x - 32)*5/9
return C
### END SOLUTION
###Output
_____no_output_____
###Markdown
02. Define a function named `calculate_bmi(height, weight)` which calculates BMI according to heights in meters and weights in kilograms.\begin{equation}BMI = \frac{weight_{kg}}{height_{m}^2}\end{equation}Source: - Expected inputs:2 numerics `height` and `weight`.- Expected outputs:a numeric.
###Code
def calculate_bmi(height, weight):
"""
>>> calculate_bmi(216, 147) # Shaquille O'Neal in his prime
31.507201646090532
>>> calculate_bmi(206, 113) # LeBron James
26.628334433028563
>>> calculate_bmi(211, 110) # Giannis Antetokounmpo
24.70744143213315
"""
### BEGIN SOLUTION
BMI = weight / ((height * .01)**2)
return BMI
### END SOLUTION
###Output
_____no_output_____
###Markdown
03. Define a function named `show_big_mac_index(country, currency, price)` which returns the Big Mac Index given a country, its currency, and the price of a Big Mac. - Expected inputs:2 strings and a numeric.- Expected outputs:a string.
###Code
def show_big_mac_index(country, currency, price):
"""
>>> show_big_mac_index('US', 'USD', 5.65)
A Big Mac costs 5.65 USD in US.
>>> show_big_mac_index('South Korea', 'Won', 6520)
A Big Mac costs 6,520.00 Won in South Korea.
>>> show_big_mac_index('Taiwan', 'NTD', 72)
A Big Mac costs 72.00 NTD in Taiwan.
"""
### BEGIN SOLUTION
sentence = "A Big Mac costs {:0,.2f} {} in {}.".format(price, currency, country)
return sentence
### END SOLUTION
###Output
_____no_output_____
###Markdown
04. Define a function named `is_a_divisor(x, y)` which returns whether `x` is a divisor of `y` or not.- Expected inputs:2 integers.- Expected outputs:a boolean.
###Code
def is_a_divisor(x, y):
"""
>>> is_a_divisor(1, 3)
True
>>> is_a_divisor(2, 3)
False
>>> is_a_divisor(3, 3)
True
>>> is_a_divisor(1, 4)
True
>>> is_a_divisor(2, 4)
True
>>> is_a_divisor(3, 4)
False
>>> is_a_divisor(4, 4)
True
"""
### BEGIN SOLUTION
divisor = not(bool(y%x))
return divisor
### END SOLUTION
###Output
_____no_output_____
###Markdown
05. Define a function named `contains_vowels(x)` which returns whether x contains one of the vowels: a, e, i, o, u or not.- Expected inputs:a string.- Expected outputs:a boolean.
###Code
def contains_vowels(x):
"""
>>> contains_vowels('pythn')
False
>>> contains_vowels('ncnd')
False
>>> contains_vowels('rtclt')
False
>>> contains_vowels('python')
True
>>> contains_vowels('anaconda')
True
>>> contains_vowels('reticulate')
True
"""
### BEGIN SOLUTION
if x.find('a') != -1:
return True
elif x.find('e') != -1:
return True
elif x.find('i') != -1:
return True
elif x.find('o') != -1:
return True
elif x.find('u') != -1:
return True
else:
return False
### END SOLUTION
###Output
_____no_output_____
###Markdown
Run tests!Kernel -> Restart & Run All. -> Restart And Run All Cells.
###Code
class TestHomeworkZero(unittest.TestCase):
def test_01_convert_fahrenheit_to_celsius(self):
self.assertAlmostEqual(convert_fahrenheit_to_celsius(212), 100.0)
self.assertAlmostEqual(convert_fahrenheit_to_celsius(32), 0.0)
def test_02_calculate_bmi(self):
self.assertTrue(calculate_bmi(216, 147) > 31)
self.assertTrue(calculate_bmi(216, 147) < 32)
self.assertTrue(calculate_bmi(206, 113) > 26)
self.assertTrue(calculate_bmi(206, 113) < 27)
self.assertTrue(calculate_bmi(211, 110) > 24)
self.assertTrue(calculate_bmi(211, 110) < 25)
def test_03_show_big_mac_index(self):
self.assertEqual(show_big_mac_index('US', 'USD', 5.65), 'A Big Mac costs 5.65 USD in US.')
self.assertEqual(show_big_mac_index('South Korea', 'Won', 6520), 'A Big Mac costs 6,520.00 Won in South Korea.')
self.assertEqual(show_big_mac_index('Taiwan', 'NTD', 72), 'A Big Mac costs 72.00 NTD in Taiwan.')
def test_04_is_a_divisor(self):
self.assertTrue(is_a_divisor(1, 2))
self.assertTrue(is_a_divisor(2, 2))
self.assertTrue(is_a_divisor(1, 3))
self.assertFalse(is_a_divisor(2, 3))
self.assertTrue(is_a_divisor(1, 4))
self.assertTrue(is_a_divisor(2, 4))
self.assertFalse(is_a_divisor(3, 4))
self.assertTrue(is_a_divisor(4, 4))
def test_05_contains_vowels(self):
self.assertFalse(contains_vowels('pythn'))
self.assertFalse(contains_vowels('ncnd'))
self.assertFalse(contains_vowels('rtclt'))
self.assertTrue(contains_vowels('python'))
self.assertTrue(contains_vowels('anaconda'))
self.assertTrue(contains_vowels('reticulate'))
suite = unittest.TestLoader().loadTestsFromTestCase(TestHomeworkZero)
runner = unittest.TextTestRunner(verbosity=2)
test_results = runner.run(suite)
number_of_failures = len(test_results.failures)
number_of_errors = len(test_results.errors)
number_of_test_runs = test_results.testsRun
number_of_successes = number_of_test_runs - (number_of_failures + number_of_errors)
print("You've got {} successes among {} questions.".format(number_of_successes, number_of_test_runs))
###Output
You've got 5 successes among 5 questions.
###Markdown
Population Genetics*Önder Kartal, University of Zurich* This is a collection of elementary exercises that introduce you to the most fundamental concepts of population genetics. We use Python to explore these topics and solve problems.The exercises have been chosen for a one-day workshop on modeling, with 2.5 hours of exercises preceded by approx. 3 hours of lectures (a primer on population genetics and probability theory). Evidently, it is not possible to cover a lot of material in this time; but upon finishing this workshop, you should feel comfortable picking up a textbook on population genetics and exploring the many software packages that are available for population genetics.__Note__: You can skip the exercises marked by an asterisk and tackle them if time permits. PreliminariesAll exercises can in principle be solved using only the Python standard library and a plotting library. However, if it feels more comfortable to you, you can also use the libraries numpy and pandas. Note that you have a link to the documentation of Python and the standard scientific libraries in the "Help" menu of the Jupyter/IPython notebook.IPython has so-called [magic commands](http://ipython.readthedocs.org/en/stable/interactive/magics.html) (starting with %) to facilitate certain tasks. In our case, we want to import libraries for efficient handling of numeric data (numpy) and for plotting data (matplotlib). Evaluate the following two commands by pressing shift+enter in the cell; they import the necessary libraries and enable inline display of figures (it may take a few seconds).
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Let us define two vector variables (a regular sequence and a random one) and print them.
###Code
x, y = np.arange(10), np.random.rand(10)
print(x, y, sep='\n')
###Output
[0 1 2 3 4 5 6 7 8 9]
[ 0.43191072 0.17692517 0.42616389 0.73142861 0.52226624 0.54028909
0.19863538 0.08375168 0.03776252 0.73497815]
###Markdown
The following command plots $y$ as a function of $x$ and labels the axes using $\LaTeX$.
###Code
plt.plot(x, y, linestyle='--', color='r', linewidth=2)
plt.xlabel('time, $t$')
plt.ylabel('frequency, $f$')
###Output
_____no_output_____
###Markdown
From the [tutorial](http://matplotlib.org/users/pyplot_tutorial.html): "matplotlib.pyplot is a collection of command style functions that make matplotlib work like MATLAB."__Comment__: The tutorial is a good starting point to learn about the most basic functionalities of matplotlib, especially if you are familiar with MATLAB. Matplotlib is a powerful library but sometimes too complicated for making statistical plots à la *R*. However, there are other libraries that, in part, are built on matplotlib and provide more convenient functionality for statistical use cases, especially in conjunction with the data structures that the library *pandas* provides (see [pandas](http://pandas.pydata.org/pandas-docs/stable/visualization.html), [seaborn](http://stanford.edu/~mwaskom/software/seaborn/), [ggplot](http://ggplot.yhathq.com/) and many more). Hardy-Weinberg EquilibriumThese exercises should make you comfortable with the fundamental notions of population genetics: allele and genotype frequencies, homo- and heterozygosity, and inbreeding.We will use data from a classical paper on enzyme polymorphisms at the alkaline phosphatase (ALP) locus in humans ([Harris 1966](http://www.jstor.org/stable/75451)). In this case, the alleles have been defined in terms of protein properties. Harris could electrophoretically distinguish three proteins by their migration speed and called them S (slow), F (fast), and I (intermediate).We use a Python [dictionary](https://docs.python.org/3.4/library/stdtypes.htmlmapping-types-dict) to store the observed numbers of genotypes at the ALP locus in a sample from the English people.
###Code
alp_genotype = {'obs_number':
{'SS': 141, 'SF': 111, 'FF': 28, 'SI': 32, 'FI': 15, 'II': 5}
}
###Output
_____no_output_____
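###Markdown
As a warm-up for the exercises (a minimal sketch added for illustration, not part of the original worksheet), allele frequencies can be computed from these genotype counts by tallying two allele copies per homozygote and one copy of each allele per heterozygote:

```python
# Hypothetical helper: tally allele copies from the genotype sample defined above.
from collections import Counter

def allele_frequencies(obs_number):
    counts = Counter()
    for genotype, n in obs_number.items():
        for allele in genotype:          # e.g. 'SF' contributes one S copy and one F copy
            counts[allele] += n
    total = sum(counts.values())         # equals 2 * sample size
    return {allele: c / total for allele, c in counts.items()}

allele_frequencies(alp_genotype['obs_number'])
```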
###Markdown
Data Munging: StringsData munging, the process of wrestling with data to make it into something clean and usable, is an important part of any job analyzing data.Today we're going to focus on some data that has information we want, but the information is not properly *structured*. In particular, it comes as a single column with a string value, and we want to turn it into a series of boolean columns.To do that, we're going to use the powerful built-in methods Python provides us to work with strings. You can read all about the available methods here: https://docs.python.org/3/library/string.htmlIn particular, we're going to use `.split()`, which is a method that turns a string into a list of strings, and `.strip()`, which removes the "whitespace" from a string.
###Code
# Play:
#
# Take a look at the official Python documentation for the
# "split" and "strip" methods. Play around with them now
# to make sure you understand how they work:
import pandas as pd
#
# 1)
# Read the data in a csv called "jobs.csv" into a DataFrame.
# This data is from a site that posts job ads online.
# Each row represents an ad for a job on the site.
jobs = pd.read_csv('jobs.csv')
#
# Take a look at your data and note that you have
# a column called `pay`. That column is a string,
# as far as Python is concerned. However, to us
# humans, we notice that the information is more
# structured than that. It seems like a "collection
# of keywords," where each job can have zero or more
# keywords such as "Part-Time" or "Contract" which
# describe the type of contract.
#
# There are 6 different contract types.
contract_types = ['Part-time', 'Temporary', 'Internship', 'Contract', 'Commission', 'Other']
jobs['paysplit'] = pd.Series([str(n).split(', ') for n in jobs.pay])
for contract in contract_types:
jobs[contract] = pd.Series([contract in n for n in jobs.paysplit])
def fulltime(listo):
if 'nan' in listo:
return True
return False
jobs['Full-time'] = pd.Series([fulltime(n) for n in jobs.paysplit])
jobs['alljobs'] = pd.Series([True for n in jobs.paysplit])
jobs.head(30)
#
# 2)
# Break down your tasks, write a "pipeline" function
# called "add_contract_types".
#
# HINT: last time, each "step" returned a DataFrame
# object. This might not be the case this time, the
# steps can return any data type that is helpful
# to move on to the next step!
#Did simplified code first...
#
# 3)
# Now write all the "steps" (functions) needed
# by your pipeline function (add_contract_types)
#Did simplified code first...
#
# 4)
# Now add the needed columns by using your function
# add_contract_types. You will want the returned
# DataFrame for some of the further exercises.
#Did simplified code first...
#
# 5)
# Assume that all jobs that don't specify a contract
# type in "pay" are Full-time. Create a new column,
# called "Full-time", which is a boolean that
# should be True if the job is Full-time, false otherwise.
# Added to original code
#
# 6)
# Get the percentage of jobs for each contract type
# i.e. number of jobs of X type / number of jobs
proportions = jobs.loc[:,'Part-time':'Full-time'].sum()/jobs.loc[:,'Part-time':'Full-time'].sum().sum()*100
proportions
#
# 7)
# Which industries ('category') have the highest
# percentage of part-time jobs posted?
# The lowest?
share_parttime = (jobs.groupby('category').sum()['Part-time']/jobs.loc[:,'Part-time':'Full-time'].sum().sum()*100).sort_values(ascending= False)
print('Highest\n', share_parttime.head(5))
print('Lowest\n', share_parttime.tail(5))
#
# 8)
# Which industries ('category') have the highest
# percentage of Internship jobs posted?
# The lowest?
# Note: this question is very similar to the last.
# make a function that can answer both questions
def industry_share(df, job_type):
share = (df.groupby('category').sum()[job_type]/df.loc[:,'Part-time':'Full-time'].sum().sum()*100).sort_values(ascending= False)
print('Highest\n', share.head(5))
print('Lowest\n', share.tail(5))
industry_share(jobs,'Internship')
#
# 9)
# Use your function to ask the same question about
# Commission jobs
industry_share(jobs, 'Commission')
#
# 10)
# Let's call jobs that are either Temporary,
# Part-time or Internships "precarious".
#
# Order the industries (category) by the
# percentage of precarious jobs
#
# HINT: can you modify some previous function
# to make this question easy to answer?
#
# HINT: Make sure your variables reflect their
# content. Collections should be plural, single
# elements should be singular.
precarious_shares = (jobs.groupby('category').sum()[['Internship', 'Part-time', 'Temporary']]/jobs.loc[:,'Part-time':'Full-time'].sum().sum()*100)
precarious = (precarious_shares['Internship'] + precarious_shares['Part-time'] + precarious_shares['Temporary']).sort_values(ascending = False)
precarious
#
# 11)
# Get the 5 companies who post the most jobs
# in each category, along with the number of
# jobs listed by each company.
job_counts = jobs.groupby('company').sum()
job_types = ['Part-time', 'Temporary', 'Internship', 'Contract', 'Commission', 'Other', 'Full-time', 'alljobs']
for n in job_types:
print(job_counts[n].sort_values(ascending=False)[0:5])
# 12)
# Is any company in the top 5 across more than one category?
# Return the companies who are, along with the categories
# in which they appear in the top 5.
#
# FORMAT: Dataframe with 3 columns: company, category, number of jobs
job_counts = jobs.groupby('company').sum()
job_types = ['Part-time', 'Temporary', 'Internship', 'Contract', 'Commission', 'Other', 'Full-time', 'alljobs']
top_jobs = []
for n in job_types:
top_jobs += [pd.DataFrame(job_counts[n].sort_values(ascending=False)[0:5])]
top_jobs = pd.concat(top_jobs).reset_index()
appearances = pd.DataFrame(top_jobs.groupby('company').count().sum(axis = 1)).reset_index()
appearances.columns = ['company', 'appearances']
appearances = appearances[appearances.appearances > 1]
appearances = appearances.merge(top_jobs, on = 'company', how = 'inner').melt(id_vars = ['company', 'appearances'], var_name = 'job_type', value_name = 'no_jobs')
appearances[pd.notnull(appearances.no_jobs)].sort_values('company')
# HINT: take a look at the `.filter` method on GroupBy:
# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.filter.html
###Output
_____no_output_____
###Markdown
Working with APIs in PythonIn Python, there are many libraries to make HTTP requests. We will use a 3rd-party library called "requests", which is very easy to use and very popular. Making a "GET" request is as simple as:

```python
import requests
res = requests.get(url)  # returns a "Response" object
res.content              # has the "body" of the response
```

You might need to install the requests library! You can do that with the following code in a Jupyter cell:

```python
! pip install requests
```

Or, if you're using anaconda, optionally you can also do:

```python
! conda install -c anaconda requests
```

Pokemon APIThere is a simple, open API called "pokeapi" that allows us to make requests and see how to use APIs. Like everything, we first look at the documentation: https://pokeapi.co/docs/v2.html The video below will walk you through how to read the documentation page.
###Code
from IPython.lib.display import YouTubeVideo
YouTubeVideo('5-li5umLyGM', width=640,height=360)
# Let's see how to make a GET request to the API:
import requests
# let's take a look at the "Pokemon" resource
res = requests.get('https://pokeapi.co/api/v2/pokemon')
# the .json() method on the Response class essentially
# wraps a call to `json.loads` on the response body
# for us:
res.json()
# Exercise 1:
# Create a Dataframe with all the Pokemon names and the URL
# that can be used to get detailed information about them via
# the API:
#
# HINT: Take a look at the "next" property in the JSON API
# response.
# name | url |
# -----------|-----------------------
# bulbasaur | https://pokeapi.co/api/v2/pokemon/1/
# ...
# squirtle | https://pokeapi.co/api/v2/pokemon/7/
# ...
###Output
_____no_output_____
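###Markdown
One possible way to approach the pagination (a sketch only, assuming the `results`/`next` fields shown in the response above; the function name is made up):

```python
# Hypothetical sketch: follow the paginated "next" links and collect name/url pairs.
import requests
import pandas as pd

def fetch_all_pokemon(url='https://pokeapi.co/api/v2/pokemon'):
    rows = []
    while url:
        payload = requests.get(url).json()
        rows.extend(payload['results'])   # each entry has 'name' and 'url'
        url = payload['next']             # None on the last page, which ends the loop
    return pd.DataFrame(rows)
```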
###Markdown
Exercises Schools per district
###Code
from riko.collections.sync import SyncPipe
count_conf = {}
stream = (
SyncPipe('', conf={})
.count(conf={})
.output)
list(stream)
###Output
_____no_output_____
###Markdown
Boarding only enrollment per district
###Code
filter_conf = {'rule': {'field': '', 'op': 'is', 'value': ''}}
stream = (
SyncPipe('fetchdata', conf={})
.filter(conf=filter_conf)
.sum(conf={})
.output)
list(stream)
###Output
_____no_output_____
###Markdown
Tax Revenues (Income!) in BarcelonaOpen Data Barcelona provides lots of fun data about our city. You can access it here: https://opendata-ajuntament.barcelona.cat

We will be examining average tax returns per neighborhood ("barri") in the years 2016 and 2015. Tax revenues are, naturally, a proxy for income, so we're really looking at how (taxable) income varies across the city.

The columns are in Catalan, so here's a quick explanation in English:

Any = Year
Codi_Districte = District Code
Nom_Districte = District Name
Codi_Barri = Neighborhood Code
Nom_Barri = Neighborhood Name
Seccio_Censal = Census Tract Number
Import_Euros = Tax Revenue (average over all individuals in the census tract)
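To get started (a minimal sketch, assuming pandas and the file name used in the exercise below):

```python
import pandas as pd
renda_2016 = pd.read_csv('2016_renda.csv')   # one row per census tract
renda_2016.head()
```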
###Code
# Let's begin by reading the file "2016_renda.csv"
# into a DataFrame:
#
# 1)
# Get the (5) barris with the highest average tax revenues
# (i.e. average over the census tracts in each barri)
#
# 2)
# Get the difference in mean revenue between the
# poorest census tract and the richest, within
# each district.
#
# You should return a DataFrame with 2 columns:
# The district name and the difference in revenue.
###Output
_____no_output_____
###Markdown
Planning Your AttackOne pattern to make your code more legible, and to make it easier to break down big problems, is to ensure that your code can be read on two levels: one "declarative" level, where someone can read (or write) *what* will happen, and another "imperative" level, where someone can read (or write!) *how* the thing is happening.Data preparation often involves a "pipeline", a uni-directional flow of transformations where the data is moved, one step at a time, towards the final format.Creating a pipeline can be a big problem, so it's important to make a plan. One way to make a plan is to start from the final goal and write out the following statement: 1. "If I had ________ (INPUT), then it would be easy to make [FINAL GOAL], I would just need to ________ (step)."Where you should think of INPUT as "data ______ in data structure ______".That will be the final step of your pipeline. Now repeat the statement, with the FINAL GOAL being replaced with the INPUT of the previous step: 2. "If I had ________ (INPUT), then it would be easy to make [PREVIOUS INPUT], I would just need to ________ (step)."Let's see an example of this method of planning by working out an exercise:
###Code
#
# Your goal will be the following:
#
# We want to understand the income variation
# (or "spatial inequality") within each "barri".
# However, each barri is a different size.
# Larger barris will naturally have a greater
# variation, even if there isn't great variation
# between one block and the next, which is what
# we want to understand with spatial inequality.
# To deal with this, we will apply a naive solution
# of simply using the number of census tracts as
# a proxy for "physical size" of the barri. We
# will then divide the income gap (difference between
# lowest and highest income tract) within each barri
# by the number of tracts as a way to "control for size".
# This will be our measure of "spatial inequality".
#
# Your job is to return a dataframe sorted by
# spatial inequality, with any barri with one
# tract (0 inequality) removed.
#
#
# We will try to lay out a plan to solve the problem
# at hand with the process we just went over:
# 1. If I had a <<an extra column on the dataframe of
# the income gap divided by the number of tracts>>
# then it would be easy to <<get the barris with
# highest and lowest normalized income gap>>, I
# would just need to <<sort the dataframe by that
# column>>.
#
# 2. If I had << A. a column for the income gap and
# B. a column for the number of tracts in a barri>>
# then it would be easy to make << an extra column on the
# dataframe of the income gap divided by the number of tracts>>
# I would just need to <<divide one column by the other>>.
#
#3b. If I had <<the raw data>>, then it would be easy to make
# <<a column with the number of tracts>>, I would just need
# to <<count the number of tracts per barri>>.
#
#3a. If I had <<the raw data>>, then it would be easy to make
# <<a column with the income gap>>, I would just need to
# <<calculate the income difference between tracts in each
# barri>>.
#
# Now we can use this outline to write a declarative pipeline
# function (in the opposite order of the steps we wrote):
def spatial_inequality_in_barcelona(df):
df = add_income_diff_for_barris(df)
df = add_num_tracts_per_barri(df)
df = add_inequality(df)
return inequality_by_barri(df)
# In the next exercises, you will write each of those functions,
# and in the end, use this function to compare barris based on
# their spatial inequality.
#
# 3)
# Write the function: "add_income_diff_for_barris"
#
# HINT: Make sure the returned dataframe is the
# same size as the original!
#
#
# 4)
# Create the function: "add_num_tracts_per_barri"
#
# 5)
# Create the function: "add_inequality"
#
# 6)
# Add the function "inequality_by_barri"
#
# Note that this function should probably
# make sure that the dataframe has the
# same number of rows as number of barris
# (i.e. one barri per row).
#
# Also note that some barris have an inequality
# of 0, let's go ahead and remove them!
#
# 7)
# Try out the function we wrote out in the planning
# phase, spatial_inequality_in_barcelona,
# does it work when given the raw data?
#
# Now let's go ahead and "refactor"
# "Refactoring" means rewriting the code without
# changing the functionality. What we wrote works,
# and is great and legible.
#
# But maybe breaking it down into so many separate
# steps, while didactic, could be considered overkill
# and maybe isn't the most efficient. You probably
# grouped by "Nom_Barri" at least 3 separate times!
#
# Try to rewrite the function spatial_inequality_in_barcelona
# to be more efficient (to only groupby Nom_Barri once!)
# and a bit shorter.
# Open Data Barcelona provides the tax data for years
# 2015 and 2016 in different csv's. Read in the tax data
# for year 2015 so we can see how incomes have changed
# between the years.
#
# 8)
# Get the growth of the mean tax revenue per census
# tract. Create a DataFrame that has the district, barri,
# and census tract as well as the difference in revenue
# between the years for each tract.
#
# Sort by the difference per tract.
#
# 9)
# Get the mean growth per barri.
# Sort by mean growth.
###Output
_____no_output_____
###Markdown
Exercises Lowest crime per province (pure python) Description Print the lowest crime activity for each province from the 'filtered-crime-stats' data. Results should look as follows:
###Code
'FS'
('All theft not mentioned elsewhere', 2940)
'GP'
('Drug-related crime', 5229)
'KZN'
('Drug-related crime', 4571)
'WC'
('Common assault', 2188)
###Output
_____no_output_____
###Markdown
Hint
###Code
from csv import DictReader
from io import open
from os import path as p
from itertools import groupby
from operator import itemgetter
url = p.abspath('filtered-crime-stats.csv')
f = open(url)
# ...
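# A possible approach (a sketch, assuming the 'Province', 'Crime', and 'Incidents' columns shown in the
# expected results above): itertools.groupby only groups *consecutive* rows, so sort by the grouping key first, e.g.
# rows = sorted(DictReader(f), key=itemgetter('Province'))
# grouped = groupby(rows, key=itemgetter('Province'))
# Remember that DictReader yields strings, so convert 'Incidents' with int() before comparing counts.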
grouped = [('key', [])]
for key, group in grouped:
print(key)
# ...
sub_grouped = [('sub_key', [])]
low_count, low_key = 0, None
for sub_key, sub_group in sub_grouped:
pass
print((low_key, low_count))
###Output
key
(None, 0)
###Markdown
Lowest crime per province (meza) Description Now perform the same task using meza. Results should look as follows:
###Code
{'Police Station': 'Park Road', 'Incidents': 2940, 'Province': 'FS', 'Crime': 'All theft not mentioned elsewhere', 'Year': 2014}
{'Police Station': 'Eldorado Park', 'Incidents': 5229, 'Province': 'GP', 'Crime': 'Drug-related crime', 'Year': 2014}
{'Police Station': 'Durban Central', 'Incidents': 4571, 'Province': 'KZN', 'Crime': 'Drug-related crime', 'Year': 2014}
{'Police Station': 'Mitchells Plain', 'Incidents': 2188, 'Province': 'WC', 'Crime': 'Common assault', 'Year': 2014}
###Output
_____no_output_____
###Markdown
Hint
###Code
from meza.io import read_csv
from meza.process import group, detect_types, type_cast
# ...
grouped = [('key', [])]
for key, _group in grouped:
sub_grouped = [('sub_key', [])]
# ...
print({})
###Output
{}
|
examples/user_guide/Continuous_Coordinates.ipynb | ###Markdown
Continuous Coordinates HoloViews is designed to work with scientific and engineering data, which is often in the form of discrete samples from an underlying continuous system. Imaging data is one clear example: measurements taken at a regular interval over a grid covering a two-dimensional area. Although the measurements are discrete, they approximate a continuous distribution, and HoloViews provides extensive support for working naturally with data of this type. 2D Continuous spaces In this user guide we will show the support provided by HoloViews for working with two-dimensional regularly sampled grid data like images, and then in subsequent sections discuss how HoloViews supports one-dimensional, higher-dimensional, and irregularly sampled data with continuous coordinates.
###Code
import numpy as np
import holoviews as hv
hv.extension('matplotlib')
np.set_printoptions(precision=2, linewidth=80)
hv.Dimension.type_formatters[np.float64] = '%.2f'
%opts HeatMap (cmap="hot")
###Output
_____no_output_____
###Markdown
First, let's consider:

|||
|:--------------:|:----------------|
| **``f(x,y)``** | a simple function that accepts a location in a 2D plane specified in millimeters (mm) |
| **``region``** | a 1mm×1mm square region of this 2D plane, centered at the origin, and |
| **``coords``** | a function returning a square (s×s) grid of (x,y) coordinates regularly sampling the region in the given bounds, at the centers of each grid cell |
||||
###Code
def f(x,y):
return x+y/3.1
region=(-0.5,-0.5,0.5,0.5)
def coords(bounds,samples):
l,b,r,t=bounds
hc=0.5/samples
return np.meshgrid(np.linspace(l+hc,r-hc,samples),
np.linspace(b+hc,t-hc,samples))
###Output
_____no_output_____
###Markdown
Now let's build a Numpy array regularly sampling this function at a density of 5 samples per mm:
###Code
f5=f(*coords(region,5))
f5
###Output
_____no_output_____
###Markdown
We can visualize this array (and thus the function ``f``) either using a ``Raster``, which uses the array's own integer-based coordinate system (which we will call "array" coordinates), or an ``Image``, which uses a continuous coordinate system, or as a ``HeatMap`` labelling each value explicitly:
###Code
r5 = hv.Raster(f5, label="R5")
i5 = hv.Image( f5, label="I5", bounds=region)
h5 = hv.HeatMap([(x, y, f5[4-y,x]) for x in range(0,5) for y in range(0,5)], label="H5")
r5+i5+h5
###Output
_____no_output_____
###Markdown
Both the ``Raster`` and ``Image`` ``Element`` types accept the same input data and show the same arrangement of colors, but a visualization of the ``Raster`` type reveals the underlying raw array indexing, while the ``Image`` type has been labelled with the coordinate system from which we know the data has been sampled. All ``Image`` operations work with this continuous coordinate system instead, while the corresponding operations on a ``Raster`` use raw array indexing.For instance, all five of these indexing operations refer to the same element of the underlying Numpy array, i.e. the second item in the first row:
###Code
"r5[0,1]=%0.2f r5.data[0,1]=%0.2f i5[-0.2,0.4]=%0.2f i5[-0.24,0.37]=%0.2f i5.data[0,1]=%0.2f" % \
(r5[1,0], r5.data[0,1], i5[-0.2,0.4], i5[-0.24,0.37], i5.data[0,1])
###Output
_____no_output_____
###Markdown
You can see that the ``Raster`` and the underlying ``.data`` elements both use Numpy's raw integer indexing, while the ``Image`` uses floating-point values that are then mapped onto the appropriate array element.This diagram should help show the relationships between the ``Raster`` coordinate system in the plot (which ranges from 0 at the top edge to 5 at the bottom), the underlying raw Numpy integer array indexes (labelling each dot in the **Array coordinates** figure), and the underlying **Continuous coordinates**: Array coordinatesContinuous coordinates Importantly, although we used a 5×5 array in this example, we could substitute a much larger array with the same continuous coordinate system if we wished, without having to change any of our continuous indexes -- they will still point to the correct location in the continuous space:
###Code
f10=f(*coords(region,10))
f10
r10 = hv.Raster(f10, label="R10")
i10 = hv.Image(f10, label="I10", bounds=region)
r10+i10
###Output
_____no_output_____
###Markdown
The image now has higher resolution, but still visualizes the same underlying continuous function, now evaluated at 100 grid positions instead of 25: Array coordinatesContinuous coordinates Indexing the exact same coordinates as above now gets very different results:
###Code
"r10[0,1]=%0.2f r10.data[0,1]=%0.2f i10[-0.2,0.4]=%0.2f i10[-0.24,0.37]=%0.2f i10.data[0,1]=%0.2f" % \
(r10[1,0], r10.data[0,1], i10[-0.2,0.4], i10[-0.24,0.37], i10.data[0,1])
###Output
_____no_output_____
###Markdown
The array-based indexes used by ``Raster`` and the Numpy array in ``.data`` still return the second item in the first row of the array, but this array element now corresponds to location (-0.35,0.4) in the continuous function, and so the value is different. These indexes thus do *not* refer to the same location in continuous space as they did for the other array density, because raw Numpy-based indexing is *not* independent of density or resolution.Luckily, the two continuous coordinates still return very similar values to what they did before, since they always return the value of the array element corresponding to the closest location in continuous space. They now return elements just above and to the right, or just below and to the left, of the earlier location, because the array now has a higher resolution with elements centered at different locations. Indexing in continuous coordinates always returns the value closest to the requested value, given the available resolution. Note that in the case of coordinates truly on the boundary between array elements (as for -0.2,0.4), the bounds of each array cell are taken as right exclusive and upper exclusive, and so (-0.2,0.4) returns array index (3,0). Slicing in 2D In addition to indexing (looking up a value), slicing (selecting a region) works as expected in continuous space (see the [Indexing and Selecting](09-Indexing_and_Selecting.ipynb) user guide for more explanation). For instance, we can ask for a slice from (-0.275,-0.0125) to (0.025,0.2885) in continuous coordinates:
###Code
sl10=i10[-0.275:0.025,-0.0125:0.2885]
sl10.data
sl10
###Output
_____no_output_____
###Markdown
This slice has selected those array elements whose centers are contained within the specified continuous space. To do this, the continuous coordinates are first converted by HoloViews into the floating-point range (5.125,2.250) (2.125,5.250) of array coordinates, and all those elements whose centers are in that range are selected: Array coordinatesContinuous coordinates Slicing also works for ``Raster`` elements, but it results in an object that always reflects the contents of the underlying Numpy array (i.e., always with the upper left corner labelled 0,0):
###Code
r5[0:3,1:3] + r5[0:3,1:2]
###Output
_____no_output_____
###Markdown
Hopefully these examples make it clear that if you are using data that is sampled from some underlying continuous system, you should use the continuous coordinates offered by HoloViews objects like ``Image`` so that your programs can be independent of the resolution or sampling density of that data, and so that your axes and indexes can be expressed naturally, using the actual units of the underlying continuous space. The data will still be stored in the same Numpy array, but now you can treat it consistently like the approximation to continuous values that it is. 1D and nD Continuous coordinates All of the above examples use the common case for visualizations of a two-dimensional regularly gridded continuous space, which is implemented in ``holoviews.core.sheetcoords.SheetCoordinateSystem``. Similar continuous coordinates and slicing are also supported for ``Chart`` elements, such as ``Curve``s, but using a single index and allowing arbitrary irregular spacing, implemented in ``holoviews.elements.chart.Chart``. They also work the same for the n-dimensional coordinates and slicing supported by the [container](Containers) types ``HoloMap``, ``NdLayout``, and ``NdOverlay``, implemented in ``holoviews.core.dimension.Dimensioned`` and again allowing arbitrary irregular spacing. Together, these powerful continuous-coordinate indexing and slicing operations allow you to work naturally and simply in the full *n*-dimensional space that characterizes your data and parameter values. Sampling The above examples focus on indexing and slicing, but as described in the [Indexing and Selecting](09-Indexing_and_Selecting.ipynb) user guide there is another related operation supported for continuous spaces, called sampling. Sampling is similar to indexing and slicing, in that all of them can reduce the dimensionality of your data, but sampling is implemented in a general way that applies for any of the 1D, 2D, or nD datatypes. For instance, if we take our 10×10 array from above, we can ask for the value at a given location, which will come back as a ``Table``, i.e. a dictionary with one (key,value) pair:
###Code
e10=i10.sample(x=-0.275, y=0.2885)
e10
###Output
_____no_output_____
###Markdown
Similarly, if we ask for the value of a given *y* location in continuous space, we will get a ``Curve`` with the array row closest to that *y* value in the ``Image`` 2D array returned as arrays of $x$ values and the corresponding *z* value from the image:
###Code
r10=i10.sample(y=0.2885)
r10
###Output
_____no_output_____
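###Markdown
As a brief 1D illustration of the continuous slicing mentioned above (a sketch added for illustration, not from the original guide), ``Curve`` elements are sliced by their continuous key-dimension values rather than by integer position:

```python
xs = np.linspace(0, 1, 11)
curve = hv.Curve((xs, xs**2))
curve[0.25:0.75]   # keeps the samples whose x value falls in the continuous range 0.25-0.75
```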
###Markdown
Continuous Coordinates HoloViews is designed to work with scientific and engineering data, which is often in the form of discrete samples from an underlying continuous system. Imaging data is one clear example: measurements taken at a regular interval over a grid covering a two-dimensional area. Although the measurements are discrete, they approximate a continuous distribution, and HoloViews provides extensive support for working naturally with data of this type. 2D Continuous spaces In this user guide we will show the support provided by HoloViews for working with two-dimensional regularly sampled grid data like images, and then in subsequent sections discuss how HoloViews supports one-dimensional, higher-dimensional, and irregularly sampled data with continuous coordinates.
###Code
import numpy as np
import holoviews as hv
from holoviews import opts
hv.extension('bokeh')
np.set_printoptions(precision=2, linewidth=80)
opts.defaults(opts.Layout(shared_axes=False))
###Output
_____no_output_____
###Markdown
First, let's consider:

|||
|:--------------:|:----------------|
| **``f(x,y)``** | a simple function that accepts a location in a 2D plane specified in millimeters (mm) |
| **``region``** | a 1mm×1mm square region of this 2D plane, centered at the origin, and |
| **``coords``** | a function returning a square (s×s) grid of (x,y) coordinates regularly sampling the region in the given bounds, at the centers of each grid cell |
||||
###Code
def f(x,y):
return x+y/3.1
region=(-0.5,-0.5,0.5,0.5)
def coords(bounds,samples):
l,b,r,t=bounds
hc=0.5/samples
return np.meshgrid(np.linspace(l+hc,r-hc,samples),
np.linspace(b+hc,t-hc,samples))
###Output
_____no_output_____
###Markdown
Now let's build a Numpy array regularly sampling this function at a density of 5 samples per mm:
###Code
f5=f(*coords(region,5))
f5
###Output
_____no_output_____
###Markdown
We can visualize this array (and thus the function ``f``) either using a ``Raster``, which uses the array's own integer-based coordinate system (which we will call "array" coordinates), or an ``Image``, which uses a continuous coordinate system, or as a ``HeatMap`` labelling each value explicitly:
###Code
r5 = hv.Raster(f5, label="R5")
i5 = hv.Image( f5, label="I5", bounds=region)
h5 = hv.HeatMap([(x, y, round(f5[4-y,x],2)) for x in range(0,5) for y in range(0,5)], label="H5")
h5_labels = hv.Labels(h5).opts(padding=0)
r5 + i5 + h5*h5_labels
###Output
_____no_output_____
###Markdown
Both the ``Raster`` and ``Image`` ``Element`` types accept the same input data and show the same arrangement of colors, but a visualization of the ``Raster`` type reveals the underlying raw array indexing, while the ``Image`` type has been labelled with the coordinate system from which we know the data has been sampled. All ``Image`` operations work with this continuous coordinate system instead, while the corresponding operations on a ``Raster`` use raw array indexing.For instance, all five of these indexing operations refer to the same element of the underlying Numpy array, i.e. the second item in the first row:
###Code
"r5[0,1]=%0.2f r5.data[0,1]=%0.2f i5[-0.2,0.4]=%0.2f i5[-0.24,0.37]=%0.2f i5.data[0,1]=%0.2f" % \
(r5[1,0], r5.data[0,1], i5[-0.2,0.4], i5[-0.24,0.37], i5.data[0,1])
###Output
_____no_output_____
###Markdown
You can see that the ``Raster`` and the underlying ``.data`` elements both use Numpy's raw integer indexing, while the ``Image`` uses floating-point values that are then mapped onto the appropriate array element.This diagram should help show the relationships between the ``Raster`` coordinate system in the plot (which ranges from 0 at the top edge to 5 at the bottom), the underlying raw Numpy integer array indexes (labelling each dot in the **Array coordinates** figure), and the underlying **Continuous coordinates**: Array coordinatesContinuous coordinates Importantly, although we used a 5×5 array in this example, we could substitute a much larger array with the same continuous coordinate system if we wished, without having to change any of our continuous indexes -- they will still point to the correct location in the continuous space:
###Code
f10=f(*coords(region,10))
f10
r10 = hv.Raster(f10, label="R10")
i10 = hv.Image(f10, label="I10", bounds=region)
r10+i10
###Output
_____no_output_____
###Markdown
The image now has higher resolution, but still visualizes the same underlying continuous function, now evaluated at 100 grid positions instead of 25: Array coordinatesContinuous coordinates Indexing the exact same coordinates as above now gets very different results:
###Code
"r10[0,1]=%0.2f r10.data[0,1]=%0.2f i10[-0.2,0.4]=%0.2f i10[-0.24,0.37]=%0.2f i10.data[0,1]=%0.2f" % \
(r10[1,0], r10.data[0,1], i10[-0.2,0.4], i10[-0.24,0.37], i10.data[0,1])
###Output
_____no_output_____
###Markdown
The array-based indexes used by ``Raster`` and the Numpy array in ``.data`` still return the second item in the first row of the array, but this array element now corresponds to location (-0.35,0.4) in the continuous function, and so the value is different. These indexes thus do *not* refer to the same location in continuous space as they did for the other array density, because raw Numpy-based indexing is *not* independent of density or resolution.Luckily, the two continuous coordinates still return very similar values to what they did before, since they always return the value of the array element corresponding to the closest location in continuous space. They now return elements just above and to the right, or just below and to the left, of the earlier location, because the array now has a higher resolution with elements centered at different locations. Indexing in continuous coordinates always returns the value closest to the requested value, given the available resolution. Note that in the case of coordinates truly on the boundary between array elements (as for -0.2,0.4), the bounds of each array cell are taken as right exclusive and upper exclusive, and so (-0.2,0.4) returns array index (3,0). Slicing in 2D In addition to indexing (looking up a value), slicing (selecting a region) works as expected in continuous space (see the [Indexing and Selecting](10-Indexing_and_Selecting.ipynb) user guide for more explanation). For instance, we can ask for a slice from (-0.275,-0.0125) to (0.025,0.2885) in continuous coordinates:
###Code
sl10=i10[-0.275:0.025,-0.0125:0.2885]
sl10.data
sl10
###Output
_____no_output_____
###Markdown
This slice has selected those array elements whose centers are contained within the specified continuous space. To do this, the continuous coordinates are first converted by HoloViews into the floating-point range (5.125,2.250) (2.125,5.250) of array coordinates, and all those elements whose centers are in that range are selected: Array coordinatesContinuous coordinates Slicing also works for ``Raster`` elements, but it results in an object that always reflects the contents of the underlying Numpy array (i.e., always with the upper left corner labelled 0,0):
###Code
r5[0:3,1:3] + r5[0:3,1:2]
###Output
_____no_output_____
###Markdown
Hopefully these examples make it clear that if you are using data that is sampled from some underlying continuous system, you should use the continuous coordinates offered by HoloViews objects like ``Image`` so that your programs can be independent of the resolution or sampling density of that data, and so that your axes and indexes can be expressed naturally, using the actual units of the underlying continuous space. The data will still be stored in the same Numpy array, but now you can treat it consistently like the approximation to continuous values that it is. 1D and nD Continuous coordinates All of the above examples use the common case for visualizations of a two-dimensional regularly gridded continuous space, which is implemented in ``holoviews.core.sheetcoords.SheetCoordinateSystem``. Similar continuous coordinates and slicing are also supported for ``Chart`` elements, such as ``Curve``s, but using a single index and allowing arbitrary irregular spacing, implemented in ``holoviews.elements.chart.Chart``. They also work the same for the n-dimensional coordinates and slicing supported by the [container](Containers) types ``HoloMap``, ``NdLayout``, and ``NdOverlay``, implemented in ``holoviews.core.dimension.Dimensioned`` and again allowing arbitrary irregular spacing. ``QuadMesh`` elements are similar but allow more general types of mapping between the underlying array and the continuous space, with arbitrary spacing along each of the axes or even over the entire array. See the ``QuadMesh`` element for more details.Together, these powerful continuous-coordinate indexing and slicing operations allow you to work naturally and simply in the full *n*-dimensional space that characterizes your data and parameter values. Sampling The above examples focus on indexing and slicing, but as described in the [Indexing and Selecting](10-Indexing_and_Selecting.ipynb) user guide there is another related operation supported for continuous spaces, called sampling. Sampling is similar to indexing and slicing, in that all of them can reduce the dimensionality of your data, but sampling is implemented in a general way that applies for any of the 1D, 2D, or nD datatypes. For instance, if we take our 10×10 array from above, we can ask for the value at a given location, which will come back as a ``Table``, i.e. a dictionary with one (key,value) pair:
###Code
e10=i10.sample(x=-0.275, y=0.2885)
e10.opts(height=75)
###Output
_____no_output_____
###Markdown
Similarly, if we ask for the value of a given *y* location in continuous space, we will get a ``Curve`` with the array row closest to that *y* value in the ``Image`` 2D array returned as arrays of `x` values and the corresponding *z* value from the image:
###Code
r10=i10.sample(y=0.2885)
r10
###Output
_____no_output_____
###Markdown
Continuous Coordinates HoloViews is designed to work with scientific and engineering data, which is often in the form of discrete samples from an underlying continuous system. Imaging data is one clear example: measurements taken at a regular interval over a grid covering a two-dimensional area. Although the measurements are discrete, they approximate a continuous distribution, and HoloViews provides extensive support for working naturally with data of this type. 2D Continuous spaces In this user guide we will show the support provided by HoloViews for working with two-dimensional regularly sampled grid data like images, and then in subsequent sections discuss how HoloViews supports one-dimensional, higher-dimensional, and irregularly sampled data with continuous coordinates.
###Code
import numpy as np
import holoviews as hv
from holoviews import opts
hv.extension('bokeh')
np.set_printoptions(precision=2, linewidth=80)
opts.defaults(opts.HeatMap(cmap='fire'), opts.Layout(shared_axes=False))
###Output
_____no_output_____
###Markdown
First, let's consider: ||||:--------------:|:----------------|| **``f(x,y)``** | a simple function that accepts a location in a 2D plane specified in millimeters (mm) || **``region``** | a 1mm×1mm square region of this 2D plane, centered at the origin, and || **``coords``** | a function returning a square (s×s) grid of (x,y) coordinates regularly sampling the region in the given bounds, at the centers of each grid cell |||||
###Code
def f(x,y):
return x+y/3.1
region=(-0.5,-0.5,0.5,0.5)
def coords(bounds,samples):
l,b,r,t=bounds
hc=0.5/samples
return np.meshgrid(np.linspace(l+hc,r-hc,samples),
np.linspace(b+hc,t-hc,samples))
###Output
_____no_output_____
###Markdown
Now let's build a Numpy array regularly sampling this function at a density of 5 samples per mm:
###Code
f5=f(*coords(region,5))
f5
###Output
_____no_output_____
###Markdown
We can visualize this array (and thus the function ``f``) either using a ``Raster``, which uses the array's own integer-based coordinate system (which we will call "array" coordinates), or an ``Image``, which uses a continuous coordinate system, or as a ``HeatMap`` labelling each value explicitly:
###Code
r5 = hv.Raster(f5, label="R5")
i5 = hv.Image( f5, label="I5", bounds=region)
h5 = hv.HeatMap([(x, y, f5[4-y,x]) for x in range(0,5) for y in range(0,5)], label="H5")
r5+i5+h5
###Output
_____no_output_____
###Markdown
Both the ``Raster`` and ``Image`` ``Element`` types accept the same input data and show the same arrangement of colors, but a visualization of the ``Raster`` type reveals the underlying raw array indexing, while the ``Image`` type has been labelled with the coordinate system from which we know the data has been sampled. All ``Image`` operations work with this continuous coordinate system instead, while the corresponding operations on a ``Raster`` use raw array indexing.For instance, all five of these indexing operations refer to the same element of the underlying Numpy array, i.e. the second item in the first row:
###Code
"r5[0,1]=%0.2f r5.data[0,1]=%0.2f i5[-0.2,0.4]=%0.2f i5[-0.24,0.37]=%0.2f i5.data[0,1]=%0.2f" % \
(r5[1,0], r5.data[0,1], i5[-0.2,0.4], i5[-0.24,0.37], i5.data[0,1])
###Output
_____no_output_____
###Markdown
You can see that the ``Raster`` and the underlying ``.data`` elements both use Numpy's raw integer indexing, while the ``Image`` uses floating-point values that are then mapped onto the appropriate array element.This diagram should help show the relationships between the ``Raster`` coordinate system in the plot (which ranges from 0 at the top edge to 5 at the bottom), the underlying raw Numpy integer array indexes (labelling each dot in the **Array coordinates** figure), and the underlying **Continuous coordinates**: Array coordinatesContinuous coordinates Importantly, although we used a 5×5 array in this example, we could substitute a much larger array with the same continuous coordinate system if we wished, without having to change any of our continuous indexes -- they will still point to the correct location in the continuous space:
###Code
f10=f(*coords(region,10))
f10
r10 = hv.Raster(f10, label="R10")
i10 = hv.Image(f10, label="I10", bounds=region)
r10+i10
###Output
_____no_output_____
###Markdown
The image now has higher resolution, but still visualizes the same underlying continuous function, now evaluated at 100 grid positions instead of 25: Array coordinatesContinuous coordinates Indexing the exact same coordinates as above now gets very different results:
###Code
"r10[0,1]=%0.2f r10.data[0,1]=%0.2f i10[-0.2,0.4]=%0.2f i10[-0.24,0.37]=%0.2f i10.data[0,1]=%0.2f" % \
(r10[1,0], r10.data[0,1], i10[-0.2,0.4], i10[-0.24,0.37], i10.data[0,1])
###Output
_____no_output_____
###Markdown
The array-based indexes used by ``Raster`` and the Numpy array in ``.data`` still return the second item in the first row of the array, but this array element now corresponds to location (-0.35,0.4) in the continuous function, and so the value is different. These indexes thus do *not* refer to the same location in continuous space as they did for the other array density, because raw Numpy-based indexing is *not* independent of density or resolution.Luckily, the two continuous coordinates still return very similar values to what they did before, since they always return the value of the array element corresponding to the closest location in continuous space. They now return elements just above and to the right, or just below and to the left, of the earlier location, because the array now has a higher resolution with elements centered at different locations. Indexing in continuous coordinates always returns the value closest to the requested value, given the available resolution. Note that in the case of coordinates truly on the boundary between array elements (as for -0.2,0.4), the bounds of each array cell are taken as right exclusive and upper exclusive, and so (-0.2,0.4) returns array index (3,0). Slicing in 2D In addition to indexing (looking up a value), slicing (selecting a region) works as expected in continuous space (see the [Indexing and Selecting](10-Indexing_and_Selecting.ipynb) user guide for more explanation). For instance, we can ask for a slice from (-0.275,-0.0125) to (0.025,0.2885) in continuous coordinates:
###Code
sl10=i10[-0.275:0.025,-0.0125:0.2885]
sl10.data
sl10
###Output
_____no_output_____
###Markdown
This slice has selected those array elements whose centers are contained within the specified continuous space. To do this, the continuous coordinates are first converted by HoloViews into the floating-point range (5.125,2.250) (2.125,5.250) of array coordinates, and all those elements whose centers are in that range are selected: Array coordinatesContinuous coordinates Slicing also works for ``Raster`` elements, but it results in an object that always reflects the contents of the underlying Numpy array (i.e., always with the upper left corner labelled 0,0):
###Code
r5[0:3,1:3] + r5[0:3,1:2]
###Output
_____no_output_____
###Markdown
Hopefully these examples make it clear that if you are using data that is sampled from some underlying continuous system, you should use the continuous coordinates offered by HoloViews objects like ``Image`` so that your programs can be independent of the resolution or sampling density of that data, and so that your axes and indexes can be expressed naturally, using the actual units of the underlying continuous space. The data will still be stored in the same Numpy array, but now you can treat it consistently like the approximation to continuous values that it is. 1D and nD Continuous coordinates All of the above examples use the common case for visualizations of a two-dimensional regularly gridded continuous space, which is implemented in ``holoviews.core.sheetcoords.SheetCoordinateSystem``. Similar continuous coordinates and slicing are also supported for ``Chart`` elements, such as ``Curve``s, but using a single index and allowing arbitrary irregular spacing, implemented in ``holoviews.elements.chart.Chart``. They also work the same for the n-dimensional coordinates and slicing supported by the [container](Containers) types ``HoloMap``, ``NdLayout``, and ``NdOverlay``, implemented in ``holoviews.core.dimension.Dimensioned`` and again allowing arbitrary irregular spacing. ``QuadMesh`` elements are similar but allow more general types of mapping between the underlying array and the continuous space, with arbitrary spacing along each of the axes or even over the entire array. See the ``QuadMesh`` element for more details.Together, these powerful continuous-coordinate indexing and slicing operations allow you to work naturally and simply in the full *n*-dimensional space that characterizes your data and parameter values. Sampling The above examples focus on indexing and slicing, but as described in the [Indexing and Selecting](10-Indexing_and_Selecting.ipynb) user guide there is another related operation supported for continuous spaces, called sampling. Sampling is similar to indexing and slicing, in that all of them can reduce the dimensionality of your data, but sampling is implemented in a general way that applies for any of the 1D, 2D, or nD datatypes. For instance, if we take our 10×10 array from above, we can ask for the value at a given location, which will come back as a ``Table``, i.e. a dictionary with one (key,value) pair:
###Code
e10=i10.sample(x=-0.275, y=0.2885)
e10.opts(height=75)
###Output
_____no_output_____
###Markdown
Similarly, if we ask for the value of a given *y* location in continuous space, we will get a ``Curve`` with the array row closest to that *y* value in the ``Image`` 2D array returned as arrays of `x` values and the corresponding *z* value from the image:
###Code
r10=i10.sample(y=0.2885)
r10
###Output
_____no_output_____
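###Markdown
The ``QuadMesh`` case mentioned above can be sketched briefly. This cell is not part of the original guide and the bin edges are chosen arbitrarily for illustration; it evaluates the same function ``f`` on a grid whose columns and rows have uneven widths, something a regularly sampled ``Image`` cannot represent:
###Code
xs = np.array([-0.5, -0.4, -0.1, 0.5])   # four x edges -> three columns of uneven width
ys = np.array([-0.5, -0.3, 0.0, 0.5])    # four y edges -> three rows of uneven height
zs = f(*np.meshgrid(0.5 * (xs[:-1] + xs[1:]), 0.5 * (ys[:-1] + ys[1:])))  # sample f at the cell centers
hv.QuadMesh((xs, ys, zs), label="Q3")
###Output
_____no_output_____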
###Markdown
Continuous Coordinates HoloViews is designed to work with scientific and engineering data, which is often in the form of discrete samples from an underlying continuous system. Imaging data is one clear example: measurements taken at a regular interval over a grid covering a two-dimensional area. Although the measurements are discrete, they approximate a continuous distribution, and HoloViews provides extensive support for working naturally with data of this type. 2D Continuous spaces In this user guide we will show the support provided by HoloViews for working with two-dimensional regularly sampled grid data like images, and then in subsequent sections discuss how HoloViews supports one-dimensional, higher-dimensional, and irregularly sampled data with continuous coordinates.
###Code
import numpy as np
import holoviews as hv
from holoviews import opts
hv.extension('bokeh')
np.set_printoptions(precision=2, linewidth=80)
opts.defaults(opts.HeatMap(cmap='fire'), opts.Layout(shared_axes=False))
###Output
_____no_output_____
###Markdown
First, let's consider: ||||:--------------:|:----------------|| **``f(x,y)``** | a simple function that accepts a location in a 2D plane specified in millimeters (mm) || **``region``** | a 1mm×1mm square region of this 2D plane, centered at the origin, and || **``coords``** | a function returning a square (s×s) grid of (x,y) coordinates regularly sampling the region in the given bounds, at the centers of each grid cell |||||
###Code
def f(x,y):
return x+y/3.1
region=(-0.5,-0.5,0.5,0.5)
def coords(bounds,samples):
l,b,r,t=bounds
hc=0.5/samples
return np.meshgrid(np.linspace(l+hc,r-hc,samples),
np.linspace(b+hc,t-hc,samples))
###Output
_____no_output_____
###Markdown
Now let's build a Numpy array regularly sampling this function at a density of 5 samples per mm:
###Code
f5=f(*coords(region,5))
f5
###Output
_____no_output_____
###Markdown
We can visualize this array (and thus the function ``f``) either using a ``Raster``, which uses the array's own integer-based coordinate system (which we will call "array" coordinates), or an ``Image``, which uses a continuous coordinate system, or as a ``HeatMap`` labelling each value explicitly:
###Code
r5 = hv.Raster(f5, label="R5")
i5 = hv.Image( f5, label="I5", bounds=region)
h5 = hv.HeatMap([(x, y, f5[4-y,x]) for x in range(0,5) for y in range(0,5)], label="H5")
r5+i5+h5
###Output
_____no_output_____
###Markdown
Both the ``Raster`` and ``Image`` ``Element`` types accept the same input data and show the same arrangement of colors, but a visualization of the ``Raster`` type reveals the underlying raw array indexing, while the ``Image`` type has been labelled with the coordinate system from which we know the data has been sampled. All ``Image`` operations work with this continuous coordinate system instead, while the corresponding operations on a ``Raster`` use raw array indexing.For instance, all five of these indexing operations refer to the same element of the underlying Numpy array, i.e. the second item in the first row:
###Code
"r5[0,1]=%0.2f r5.data[0,1]=%0.2f i5[-0.2,0.4]=%0.2f i5[-0.24,0.37]=%0.2f i5.data[0,1]=%0.2f" % \
(r5[1,0], r5.data[0,1], i5[-0.2,0.4], i5[-0.24,0.37], i5.data[0,1])
###Output
_____no_output_____
###Markdown
You can see that the ``Raster`` and the underlying ``.data`` elements both use Numpy's raw integer indexing, while the ``Image`` uses floating-point values that are then mapped onto the appropriate array element.This diagram should help show the relationships between the ``Raster`` coordinate system in the plot (which ranges from 0 at the top edge to 5 at the bottom), the underlying raw Numpy integer array indexes (labelling each dot in the **Array coordinates** figure), and the underlying **Continuous coordinates**: Array coordinatesContinuous coordinates Importantly, although we used a 5×5 array in this example, we could substitute a much larger array with the same continuous coordinate system if we wished, without having to change any of our continuous indexes -- they will still point to the correct location in the continuous space:
###Code
f10=f(*coords(region,10))
f10
r10 = hv.Raster(f10, label="R10")
i10 = hv.Image(f10, label="I10", bounds=region)
r10+i10
###Output
_____no_output_____
###Markdown
The image now has higher resolution, but still visualizes the same underlying continuous function, now evaluated at 100 grid positions instead of 25: Array coordinatesContinuous coordinates Indexing the exact same coordinates as above now gets very different results:
###Code
"r10[0,1]=%0.2f r10.data[0,1]=%0.2f i10[-0.2,0.4]=%0.2f i10[-0.24,0.37]=%0.2f i10.data[0,1]=%0.2f" % \
(r10[1,0], r10.data[0,1], i10[-0.2,0.4], i10[-0.24,0.37], i10.data[0,1])
###Output
_____no_output_____
###Markdown
The array-based indexes used by ``Raster`` and the Numpy array in ``.data`` still return the second item in the first row of the array, but this array element now corresponds to location (-0.35,0.4) in the continuous function, and so the value is different. These indexes thus do *not* refer to the same location in continuous space as they did for the other array density, because raw Numpy-based indexing is *not* independent of density or resolution.Luckily, the two continuous coordinates still return very similar values to what they did before, since they always return the value of the array element corresponding to the closest location in continuous space. They now return elements just above and to the right, or just below and to the left, of the earlier location, because the array now has a higher resolution with elements centered at different locations. Indexing in continuous coordinates always returns the value closest to the requested value, given the available resolution. Note that in the case of coordinates truly on the boundary between array elements (as for -0.2,0.4), the bounds of each array cell are taken as right exclusive and upper exclusive, and so (-0.2,0.4) returns array index (3,0). Slicing in 2D In addition to indexing (looking up a value), slicing (selecting a region) works as expected in continuous space (see the [Indexing and Selecting](10-Indexing_and_Selecting.ipynb) user guide for more explanation). For instance, we can ask for a slice from (-0.275,-0.0125) to (0.025,0.2885) in continuous coordinates:
###Code
sl10=i10[-0.275:0.025,-0.0125:0.2885]
sl10.data
sl10
###Output
_____no_output_____
###Markdown
This slice has selected those array elements whose centers are contained within the specified continuous space. To do this, the continuous coordinates are first converted by HoloViews into the floating-point range (5.125,2.250) (2.125,5.250) of array coordinates, and all those elements whose centers are in that range are selected: Array coordinatesContinuous coordinates Slicing also works for ``Raster`` elements, but it results in an object that always reflects the contents of the underlying Numpy array (i.e., always with the upper left corner labelled 0,0):
###Code
r5[0:3,1:3] + r5[0:3,1:2]
###Output
_____no_output_____
###Markdown
Hopefully these examples make it clear that if you are using data that is sampled from some underlying continuous system, you should use the continuous coordinates offered by HoloViews objects like ``Image`` so that your programs can be independent of the resolution or sampling density of that data, and so that your axes and indexes can be expressed naturally, using the actual units of the underlying continuous space. The data will still be stored in the same Numpy array, but now you can treat it consistently like the approximation to continuous values that it is. 1D and nD Continuous coordinates All of the above examples use the common case for visualizations of a two-dimensional regularly gridded continuous space, which is implemented in ``holoviews.core.sheetcoords.SheetCoordinateSystem``. Similar continuous coordinates and slicing are also supported for ``Chart`` elements, such as ``Curve``s, but using a single index and allowing arbitrary irregular spacing, implemented in ``holoviews.elements.chart.Chart``. They also work the same for the n-dimensional coordinates and slicing supported by the [container](Containers) types ``HoloMap``, ``NdLayout``, and ``NdOverlay``, implemented in ``holoviews.core.dimension.Dimensioned`` and again allowing arbitrary irregular spacing. ``QuadMesh`` elements are similar but allow more general types of mapping between the underlying array and the continuous space, with arbitrary spacing along each of the axes or even over the entire array. See the ``QuadMesh`` element for more details.Together, these powerful continuous-coordinate indexing and slicing operations allow you to work naturally and simply in the full *n*-dimensional space that characterizes your data and parameter values. Sampling The above examples focus on indexing and slicing, but as described in the [Indexing and Selecting](10-Indexing_and_Selecting.ipynb) user guide there is another related operation supported for continuous spaces, called sampling. Sampling is similar to indexing and slicing, in that all of them can reduce the dimensionality of your data, but sampling is implemented in a general way that applies for any of the 1D, 2D, or nD datatypes. For instance, if we take our 10×10 array from above, we can ask for the value at a given location, which will come back as a ``Table``, i.e. a dictionary with one (key,value) pair:
###Code
e10=i10.sample(x=-0.275, y=0.2885)
e10.opts(height=75)
###Output
_____no_output_____
###Markdown
Similarly, if we ask for the value of a given *y* location in continuous space, we will get a ``Curve`` with the array row closest to that *y* value in the ``Image`` 2D array returned as arrays of $x$ values and the corresponding *z* value from the image:
###Code
r10=i10.sample(y=0.2885)
r10
###Output
_____no_output_____
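###Markdown
To make the container case above concrete, here is a small sketch that is not part of the original guide: a ``HoloMap`` whose key dimension is a continuous quantity (an invented 'time' value scaling ``f10``) can be sliced over a continuous range of its keys with the same syntax:
###Code
hmap = hv.HoloMap({t: hv.Image(t * f10, bounds=region, label="I10")
                   for t in np.linspace(0.5, 1.5, 5)}, kdims='time')
hmap[0.75:1.3]   # select only the frames whose 'time' key falls in this continuous range
###Output
_____no_output_____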
###Markdown
Continuous Coordinates HoloViews is designed to work with scientific and engineering data, which is often in the form of discrete samples from an underlying continuous system. Imaging data is one clear example: measurements taken at a regular interval over a grid covering a two-dimensional area. Although the measurements are discrete, they approximate a continuous distribution, and HoloViews provides extensive support for working naturally with data of this type. 2D Continuous spaces In this user guide we will show the support provided by HoloViews for working with two-dimensional regularly sampled grid data like images, and then in subsequent sections discuss how HoloViews supports one-dimensional, higher-dimensional, and irregularly sampled data with continuous coordinates.
###Code
import numpy as np
import holoviews as hv
hv.extension('matplotlib')
np.set_printoptions(precision=2, linewidth=80)
%opts HeatMap (cmap="hot")
###Output
_____no_output_____
###Markdown
First, let's consider: ||||:--------------:|:----------------|| **``f(x,y)``** | a simple function that accepts a location in a 2D plane specified in millimeters (mm) || **``region``** | a 1mm×1mm square region of this 2D plane, centered at the origin, and || **``coords``** | a function returning a square (s×s) grid of (x,y) coordinates regularly sampling the region in the given bounds, at the centers of each grid cell |||||
###Code
def f(x,y):
return x+y/3.1
region=(-0.5,-0.5,0.5,0.5)
def coords(bounds,samples):
l,b,r,t=bounds
hc=0.5/samples
return np.meshgrid(np.linspace(l+hc,r-hc,samples),
np.linspace(b+hc,t-hc,samples))
###Output
_____no_output_____
###Markdown
Now let's build a Numpy array regularly sampling this function at a density of 5 samples per mm:
###Code
f5=f(*coords(region,5))
f5
###Output
_____no_output_____
###Markdown
We can visualize this array (and thus the function ``f``) either using a ``Raster``, which uses the array's own integer-based coordinate system (which we will call "array" coordinates), or an ``Image``, which uses a continuous coordinate system, or as a ``HeatMap`` labelling each value explicitly:
###Code
r5 = hv.Raster(f5, label="R5")
i5 = hv.Image( f5, label="I5", bounds=region)
h5 = hv.HeatMap([(x, y, f5[4-y,x]) for x in range(0,5) for y in range(0,5)], label="H5")
r5+i5+h5
###Output
_____no_output_____
###Markdown
Both the ``Raster`` and ``Image`` ``Element`` types accept the same input data and show the same arrangement of colors, but a visualization of the ``Raster`` type reveals the underlying raw array indexing, while the ``Image`` type has been labelled with the coordinate system from which we know the data has been sampled. All ``Image`` operations work with this continuous coordinate system instead, while the corresponding operations on a ``Raster`` use raw array indexing.For instance, all five of these indexing operations refer to the same element of the underlying Numpy array, i.e. the second item in the first row:
###Code
"r5[0,1]=%0.2f r5.data[0,1]=%0.2f i5[-0.2,0.4]=%0.2f i5[-0.24,0.37]=%0.2f i5.data[0,1]=%0.2f" % \
(r5[1,0], r5.data[0,1], i5[-0.2,0.4], i5[-0.24,0.37], i5.data[0,1])
###Output
_____no_output_____
###Markdown
You can see that the ``Raster`` and the underlying ``.data`` elements both use Numpy's raw integer indexing, while the ``Image`` uses floating-point values that are then mapped onto the appropriate array element.This diagram should help show the relationships between the ``Raster`` coordinate system in the plot (which ranges from 0 at the top edge to 5 at the bottom), the underlying raw Numpy integer array indexes (labelling each dot in the **Array coordinates** figure), and the underlying **Continuous coordinates**: Array coordinatesContinuous coordinates Importantly, although we used a 5×5 array in this example, we could substitute a much larger array with the same continuous coordinate system if we wished, without having to change any of our continuous indexes -- they will still point to the correct location in the continuous space:
###Code
f10=f(*coords(region,10))
f10
r10 = hv.Raster(f10, label="R10")
i10 = hv.Image(f10, label="I10", bounds=region)
r10+i10
###Output
_____no_output_____
###Markdown
The image now has higher resolution, but still visualizes the same underlying continuous function, now evaluated at 100 grid positions instead of 25: Array coordinatesContinuous coordinates Indexing the exact same coordinates as above now gets very different results:
###Code
"r10[0,1]=%0.2f r10.data[0,1]=%0.2f i10[-0.2,0.4]=%0.2f i10[-0.24,0.37]=%0.2f i10.data[0,1]=%0.2f" % \
(r10[1,0], r10.data[0,1], i10[-0.2,0.4], i10[-0.24,0.37], i10.data[0,1])
###Output
_____no_output_____
###Markdown
The array-based indexes used by ``Raster`` and the Numpy array in ``.data`` still return the second item in the first row of the array, but this array element now corresponds to location (-0.35,0.4) in the continuous function, and so the value is different. These indexes thus do *not* refer to the same location in continuous space as they did for the other array density, because raw Numpy-based indexing is *not* independent of density or resolution.Luckily, the two continuous coordinates still return very similar values to what they did before, since they always return the value of the array element corresponding to the closest location in continuous space. They now return elements just above and to the right, or just below and to the left, of the earlier location, because the array now has a higher resolution with elements centered at different locations. Indexing in continuous coordinates always returns the value closest to the requested value, given the available resolution. Note that in the case of coordinates truly on the boundary between array elements (as for -0.2,0.4), the bounds of each array cell are taken as right exclusive and upper exclusive, and so (-0.2,0.4) returns array index (3,0). Slicing in 2D In addition to indexing (looking up a value), slicing (selecting a region) works as expected in continuous space (see the [Indexing and Selecting](09-Indexing_and_Selecting.ipynb) user guide for more explanation). For instance, we can ask for a slice from (-0.275,-0.0125) to (0.025,0.2885) in continuous coordinates:
###Code
sl10=i10[-0.275:0.025,-0.0125:0.2885]
sl10.data
sl10
###Output
_____no_output_____
###Markdown
This slice has selected those array elements whose centers are contained within the specified continuous space. To do this, the continuous coordinates are first converted by HoloViews into the floating-point range (5.125,2.250) (2.125,5.250) of array coordinates, and all those elements whose centers are in that range are selected: Array coordinatesContinuous coordinates Slicing also works for ``Raster`` elements, but it results in an object that always reflects the contents of the underlying Numpy array (i.e., always with the upper left corner labelled 0,0):
###Code
r5[0:3,1:3] + r5[0:3,1:2]
###Output
_____no_output_____
###Markdown
Hopefully these examples make it clear that if you are using data that is sampled from some underlying continuous system, you should use the continuous coordinates offered by HoloViews objects like ``Image`` so that your programs can be independent of the resolution or sampling density of that data, and so that your axes and indexes can be expressed naturally, using the actual units of the underlying continuous space. The data will still be stored in the same Numpy array, but now you can treat it consistently like the approximation to continuous values that it is. 1D and nD Continuous coordinates All of the above examples use the common case for visualizations of a two-dimensional regularly gridded continuous space, which is implemented in ``holoviews.core.sheetcoords.SheetCoordinateSystem``. Similar continuous coordinates and slicing are also supported for ``Chart`` elements, such as ``Curve``s, but using a single index and allowing arbitrary irregular spacing, implemented in ``holoviews.elements.chart.Chart``. They also work the same for the n-dimensional coordinates and slicing supported by the [container](Containers) types ``HoloMap``, ``NdLayout``, and ``NdOverlay``, implemented in ``holoviews.core.dimension.Dimensioned`` and again allowing arbitrary irregular spacing. Together, these powerful continuous-coordinate indexing and slicing operations allow you to work naturally and simply in the full *n*-dimensional space that characterizes your data and parameter values. Sampling The above examples focus on indexing and slicing, but as described in the [Indexing and Selecting](09-Indexing_and_Selecting.ipynb) user guide there is another related operation supported for continuous spaces, called sampling. Sampling is similar to indexing and slicing, in that all of them can reduce the dimensionality of your data, but sampling is implemented in a general way that applies for any of the 1D, 2D, or nD datatypes. For instance, if we take our 10×10 array from above, we can ask for the value at a given location, which will come back as a ``Table``, i.e. a dictionary with one (key,value) pair:
###Code
e10=i10.sample(x=-0.275, y=0.2885)
e10
###Output
_____no_output_____
###Markdown
Similarly, if we ask for the value of a given *y* location in continuous space, we will get a ``Curve`` with the array row closest to that *y* value in the ``Image`` 2D array returned as arrays of $x$ values and the corresponding *z* value from the image:
###Code
r10=i10.sample(y=0.2885)
r10
###Output
_____no_output_____
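###Markdown
By the same logic (a sketch that is not part of the original guide), sampling a single *x* location instead of a *y* location returns the closest column of the image as a ``Curve`` over *y*:
###Code
c10 = i10.sample(x=-0.275)
c10
###Output
_____no_output_____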
###Markdown
Continuous Coordinates HoloViews is designed to work with scientific and engineering data, which is often in the form of discrete samples from an underlying continuous system. Imaging data is one clear example: measurements taken at a regular interval over a grid covering a two-dimensional area. Although the measurements are discrete, they approximate a continuous distribution, and HoloViews provides extensive support for working naturally with data of this type. 2D Continuous spaces In this user guide we will show the support provided by HoloViews for working with two-dimensional regularly sampled grid data like images, and then in subsequent sections discuss how HoloViews supports one-dimensional, higher-dimensional, and irregularly sampled data with continuous coordinates.
###Code
import numpy as np
import holoviews as hv
from holoviews import opts
hv.extension('bokeh')
np.set_printoptions(precision=2, linewidth=80)
opts.defaults(opts.HeatMap(cmap='fire'), opts.Layout(shared_axes=False))
###Output
_____no_output_____
###Markdown
First, let's consider: ||||:--------------:|:----------------|| **``f(x,y)``** | a simple function that accepts a location in a 2D plane specified in millimeters (mm) || **``region``** | a 1mm×1mm square region of this 2D plane, centered at the origin, and || **``coords``** | a function returning a square (s×s) grid of (x,y) coordinates regularly sampling the region in the given bounds, at the centers of each grid cell |||||
###Code
def f(x,y):
return x+y/3.1
region=(-0.5,-0.5,0.5,0.5)
def coords(bounds,samples):
l,b,r,t=bounds
hc=0.5/samples
return np.meshgrid(np.linspace(l+hc,r-hc,samples),
np.linspace(b+hc,t-hc,samples))
###Output
_____no_output_____
###Markdown
Now let's build a Numpy array regularly sampling this function at a density of 5 samples per mm:
###Code
f5=f(*coords(region,5))
f5
###Output
_____no_output_____
###Markdown
We can visualize this array (and thus the function ``f``) either using a ``Raster``, which uses the array's own integer-based coordinate system (which we will call "array" coordinates), or an ``Image``, which uses a continuous coordinate system, or as a ``HeatMap`` labelling each value explicitly:
###Code
r5 = hv.Raster(f5, label="R5")
i5 = hv.Image( f5, label="I5", bounds=region)
h5 = hv.HeatMap([(x, y, f5[4-y,x]) for x in range(0,5) for y in range(0,5)], label="H5")
r5+i5+h5
###Output
_____no_output_____
###Markdown
Both the ``Raster`` and ``Image`` ``Element`` types accept the same input data and show the same arrangement of colors, but a visualization of the ``Raster`` type reveals the underlying raw array indexing, while the ``Image`` type has been labelled with the coordinate system from which we know the data has been sampled. All ``Image`` operations work with this continuous coordinate system instead, while the corresponding operations on a ``Raster`` use raw array indexing.For instance, all five of these indexing operations refer to the same element of the underlying Numpy array, i.e. the second item in the first row:
###Code
"r5[0,1]=%0.2f r5.data[0,1]=%0.2f i5[-0.2,0.4]=%0.2f i5[-0.24,0.37]=%0.2f i5.data[0,1]=%0.2f" % \
(r5[1,0], r5.data[0,1], i5[-0.2,0.4], i5[-0.24,0.37], i5.data[0,1])
###Output
_____no_output_____
###Markdown
You can see that the ``Raster`` and the underlying ``.data`` elements both use Numpy's raw integer indexing, while the ``Image`` uses floating-point values that are then mapped onto the appropriate array element.This diagram should help show the relationships between the ``Raster`` coordinate system in the plot (which ranges from 0 at the top edge to 5 at the bottom), the underlying raw Numpy integer array indexes (labelling each dot in the **Array coordinates** figure), and the underlying **Continuous coordinates**: Array coordinatesContinuous coordinates Importantly, although we used a 5×5 array in this example, we could substitute a much larger array with the same continuous coordinate system if we wished, without having to change any of our continuous indexes -- they will still point to the correct location in the continuous space:
###Code
f10=f(*coords(region,10))
f10
r10 = hv.Raster(f10, label="R10")
i10 = hv.Image(f10, label="I10", bounds=region)
r10+i10
###Output
_____no_output_____
###Markdown
The image now has higher resolution, but still visualizes the same underlying continuous function, now evaluated at 100 grid positions instead of 25: Array coordinatesContinuous coordinates Indexing the exact same coordinates as above now gets very different results:
###Code
"r10[0,1]=%0.2f r10.data[0,1]=%0.2f i10[-0.2,0.4]=%0.2f i10[-0.24,0.37]=%0.2f i10.data[0,1]=%0.2f" % \
(r10[1,0], r10.data[0,1], i10[-0.2,0.4], i10[-0.24,0.37], i10.data[0,1])
###Output
_____no_output_____
###Markdown
The array-based indexes used by ``Raster`` and the Numpy array in ``.data`` still return the second item in the first row of the array, but this array element now corresponds to location (-0.35,0.4) in the continuous function, and so the value is different. These indexes thus do *not* refer to the same location in continuous space as they did for the other array density, because raw Numpy-based indexing is *not* independent of density or resolution.Luckily, the two continuous coordinates still return very similar values to what they did before, since they always return the value of the array element corresponding to the closest location in continuous space. They now return elements just above and to the right, or just below and to the left, of the earlier location, because the array now has a higher resolution with elements centered at different locations. Indexing in continuous coordinates always returns the value closest to the requested value, given the available resolution. Note that in the case of coordinates truly on the boundary between array elements (as for -0.2,0.4), the bounds of each array cell are taken as right exclusive and upper exclusive, and so (-0.2,0.4) returns array index (3,0). Slicing in 2D In addition to indexing (looking up a value), slicing (selecting a region) works as expected in continuous space (see the [Indexing and Selecting](10-Indexing_and_Selecting.ipynb) user guide for more explanation). For instance, we can ask for a slice from (-0.275,-0.0125) to (0.025,0.2885) in continuous coordinates:
###Code
sl10=i10[-0.275:0.025,-0.0125:0.2885]
sl10.data
sl10
###Output
_____no_output_____
###Markdown
This slice has selected those array elements whose centers are contained within the specified continuous space. To do this, the continuous coordinates are first converted by HoloViews into the floating-point range (5.125,2.250) (2.125,5.250) of array coordinates, and all those elements whose centers are in that range are selected: Array coordinatesContinuous coordinates Slicing also works for ``Raster`` elements, but it results in an object that always reflects the contents of the underlying Numpy array (i.e., always with the upper left corner labelled 0,0):
###Code
r5[0:3,1:3] + r5[0:3,1:2]
###Output
_____no_output_____
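###Markdown
As an aside that is not part of the original guide, the continuous slice of ``i10`` shown a few cells above can also be written with the ``select`` method, passing a range tuple per dimension instead of slice syntax:
###Code
i10.select(x=(-0.275, 0.025), y=(-0.0125, 0.2885))
###Output
_____no_output_____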
###Markdown
Hopefully these examples make it clear that if you are using data that is sampled from some underlying continuous system, you should use the continuous coordinates offered by HoloViews objects like ``Image`` so that your programs can be independent of the resolution or sampling density of that data, and so that your axes and indexes can be expressed naturally, using the actual units of the underlying continuous space. The data will still be stored in the same Numpy array, but now you can treat it consistently like the approximation to continuous values that it is. 1D and nD Continuous coordinates All of the above examples use the common case for visualizations of a two-dimensional regularly gridded continuous space, which is implemented in ``holoviews.core.sheetcoords.SheetCoordinateSystem``. Similar continuous coordinates and slicing are also supported for ``Chart`` elements, such as ``Curve``s, but using a single index and allowing arbitrary irregular spacing, implemented in ``holoviews.elements.chart.Chart``. They also work the same for the n-dimensional coordinates and slicing supported by the [container](Containers) types ``HoloMap``, ``NdLayout``, and ``NdOverlay``, implemented in ``holoviews.core.dimension.Dimensioned`` and again allowing arbitrary irregular spacing. Together, these powerful continuous-coordinate indexing and slicing operations allow you to work naturally and simply in the full *n*-dimensional space that characterizes your data and parameter values. Sampling The above examples focus on indexing and slicing, but as described in the [Indexing and Selecting](10-Indexing_and_Selecting.ipynb) user guide there is another related operation supported for continuous spaces, called sampling. Sampling is similar to indexing and slicing, in that all of them can reduce the dimensionality of your data, but sampling is implemented in a general way that applies for any of the 1D, 2D, or nD datatypes. For instance, if we take our 10×10 array from above, we can ask for the value at a given location, which will come back as a ``Table``, i.e. a dictionary with one (key,value) pair:
###Code
e10=i10.sample(x=-0.275, y=0.2885)
e10.opts(height=75)
###Output
_____no_output_____
###Markdown
Similarly, if we ask for the value of a given *y* location in continuous space, we will get a ``Curve`` with the array row closest to that *y* value in the ``Image`` 2D array returned as arrays of $x$ values and the corresponding *z* value from the image:
###Code
r10=i10.sample(y=0.2885)
r10
###Output
_____no_output_____ |
Autoencoders/Autoencoders_solutions.ipynb | ###Markdown
Written by [Samuel Adekunle](mailto:[email protected])For [AI Core](http://www.theaicore.com) Introduction to Autoencoders Uses of Autoencoders Image/Audio DenoisingAutoencoders are very good at removing noise from images and generating a much clearer picture than the original. Later we will see how this can easily be implemented.![image](img/denoising_example.png) Image GenerationAn alternative to GANs is a variant of autoencoders known as [Variational Autoencoders](https://en.wikipedia.org/wiki/Autoencoder#Variational_autoencoder_(VAE)). There's a lot of complicated math involved, but in summary, the input is an image, and the variational autoencoder learns its distribution and can generate similar images.![faces generated with a vae](img/faces.png)*Faces generated with a Variational Autoencoder Model (source: [Wojciech Mormul on Github](https://github.com/WojciechMormul/vae))* Image Inpainting and Photo Restoration![context encoders](img/inpainting.jpg)*Image inpainting with context encoders (source: [Context Encoders: Feature Learning by Inpainting](https://people.eecs.berkeley.edu/~pathak/context_encoder/))* Other Uses: - Anomaly Detection and Facial Recognition - Feature Extraction and Data Compression - Language Translation Autoencoder Basic ArchitectureAn [Autoencoder](https://en.wikipedia.org/wiki/Autoencoder) is a neural network architecture that learns efficient data encodings in an unsupervised manner. What this means is that autoencoders learn to recognise the most important features of the data they are fed and reject the less important ones (i.e. noise). In doing so, they can reduce the number of features needed to represent the same data. This happens in two steps: - Data Encoding: The input data is forced through a bottleneck and transformed into a feature space, which is typically much smaller than the input space. The encoder is trained so that this feature space represents the most important features in the input space that are needed to reconstruct the data. Note: If the feature space is not smaller than the input space, then the encoder might just learn the identity function. - Data Decoding: After the input data has been reduced to some feature space, the autoencoder tries to reconstruct the original data from the reduced feature space. This is why an autoencoder is often said to undergo **unsupervised training**: the original input data is what is compared against the output of the network and used to train it. Typically, in training the autoencoder, the network tries to minimize a reconstruction loss, such as the Mean Squared Error between the input and the output.![image](img/transitions.png)*Mathematical Definition of an Autoencoder (source: [Wikipedia](https://en.wikipedia.org/wiki/Autoencoder))* Feed-Forward AutoencoderThis basic architecture will take the input and try to reproduce it at the output.![feed_foward_autoencoder](img/encoder_decoder.png)*Basic Reconstruction Autoencoder Architecture (source: [Jeremy Jordan](https://www.jeremyjordan.me/autoencoders/))*
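Before loading any real data, here is a minimal sketch of the two-step encode/decode idea described above. This cell is illustrative only and not part of the original exercise; the 784 -> 32 -> 784 sizes and the random batch are assumptions chosen to mirror the MNIST images used later:
###Code
import torch
import torch.nn as nn
x = torch.rand(16, 784)                                  # a stand-in batch of flattened 28x28 "images"
encoder = nn.Linear(784, 32)                             # bottleneck: far fewer features than inputs
decoder = nn.Linear(32, 784)
x_hat = torch.sigmoid(decoder(torch.relu(encoder(x))))   # encode, then try to reconstruct the input
reconstruction_loss = nn.functional.mse_loss(x_hat, x)   # the input itself acts as the training target
print(reconstruction_loss.item())
###Output
_____no_output_____
###Markdown
Now let's import everything we need and load the MNIST data.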
###Code
# All requirements for this notebook
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import matplotlib.pyplot as plt
import numpy as np
SEED = 5000
torch.manual_seed(SEED)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# We will be using popular MNIST dataset
train_data = torchvision.datasets.MNIST(root='MNIST-data',
transform=torchvision.transforms.ToTensor(),
train=True,
download=True
)
test_data = torchvision.datasets.MNIST(root='MNIST-data',
transform=torchvision.transforms.ToTensor(),
train=False
)
print(f"Shape of MNIST Training Dataset: {train_data.data.shape}")
print(f"Shape of MNIST Testing Dataset: {test_data.data.shape}")
def show_image_helper(image):
image = image.view(28, 28)
plt.imshow(image.cpu().detach())
plt.show()
print("Max Element: ", rdm_img.max())
print("Min Element: ", rdm_img.min())
def show_losses_helper(losses):
plt.plot(losses[1:])
plt.ylabel("Losses")
plt.xlabel("Epochs")
plt.title("Autoencoder Losses")
plt.show()
# What are we working with and what will we be doing
rdm_img = train_data.data[np.random.randint(
    0, 100)] / 255.0  # get a random example
show_image_helper(rdm_img)
print("Max Element: ", rdm_img.max())
print("Min Element: ", rdm_img.min())
# FURTHER SPLIT THE TRAINING INTO TRAINING AND VALIDATION
train_data, val_data = torch.utils.data.random_split(train_data, [
50000, 10000])
BATCH_SIZE = 128
# MAKE TRAINING DATALOADER
train_loader = torch.utils.data.DataLoader( # create a data loader
train_data, # what dataset should it sample from?
shuffle=True, # should it shuffle the examples?
batch_size=BATCH_SIZE # how large should the batches that it samples be?
)
# MAKE VALIDATION DATALOADER
val_loader = torch.utils.data.DataLoader(
val_data,
shuffle=True,
batch_size=BATCH_SIZE
)
# MAKE TEST DATALOADER
test_loader = torch.utils.data.DataLoader(
test_data,
shuffle=True,
batch_size=BATCH_SIZE
)
class AutoEncoder(nn.Module):
def __init__(self, input_size, hidden_size, code_size):
super().__init__()
self.encoder = nn.Sequential(
nn.Linear(input_size, hidden_size),
nn.ReLU(),
nn.Linear(hidden_size, code_size),
nn.ReLU()
)
self.decoder = nn.Sequential(
nn.Linear(code_size, hidden_size),
nn.ReLU(),
nn.Linear(hidden_size, input_size),
nn.Sigmoid()
)
def forward(self, x):
return self.decoder(self.encoder(x))
def train(model, num_epochs=10, learning_rate=0.01):
global EPOCHS
model.train()
losses = []
optimiser = torch.optim.Adam(model.parameters(), lr=learning_rate)
criterion = nn.BCELoss()
# criterion = nn.MSELoss()
for epoch in range(num_epochs):
EPOCHS += 1
total_loss = 0
num_batches = 0
for org_img, _ in train_loader:
optimiser.zero_grad()
org_img = org_img.double().view(-1, 784).to(device) / 255.0
gen_img = model(org_img).double()
loss = criterion(gen_img, org_img)
total_loss += loss
num_batches += 1
loss.backward() # backpropagate
optimiser.step()
average_loss = total_loss / num_batches
losses.append(average_loss)
print(f"Epoch {EPOCHS}:\tScore: {1/average_loss}")
return losses
EPOCHS = 0
INPUT_SIZE = 28*28
HIDDEN_SIZE = 128
CODE_SIZE = 32
LEARNING_RATE = 0.01
autoencoder = AutoEncoder(
INPUT_SIZE, HIDDEN_SIZE, CODE_SIZE).double().to(device)
num_epochs = 25
losses = train(autoencoder, num_epochs, LEARNING_RATE)
show_losses_helper(losses)
def validate(model):
model.eval()
criterion = torch.nn.BCELoss()
# criterion = torch.nn.MSELoss()
total_loss = 0
num_batches = 0
for val_img, _ in val_loader:
val_img = val_img.double().view(-1, 784).to(device) / 255.0
gen_img = model(val_img).double()
loss = criterion(gen_img, val_img)
total_loss += loss
num_batches += 1
average_loss = total_loss / num_batches
return 1/average_loss.item()
score = validate(autoencoder)
print("Score: ", score)
def test(model):
model.eval()
criterion = torch.nn.BCELoss()
# criterion = torch.nn.MSELoss()
total_loss = 0
num_batches = 0
stored_images = []
for test_img, _ in test_loader:
test_img = test_img.double().view(-1, 784).to(device) / 255.0
gen_img = model(test_img)
loss = criterion(gen_img.double(), test_img).item()
total_loss += loss
num_batches += 1
if np.random.random() > 0.90:
stored_images.append(
(test_img[0].clone().detach(), gen_img[0].clone().detach()))
score = average_loss = total_loss / num_batches
print(f"Score: {1/score}\n")
for original, generated in stored_images:
print("Original: ")
show_image_helper(original)
print("Generated: ")
show_image_helper(generated)
test(autoencoder)
###Output
Score: 271.5048760406239
Original:
###Markdown
Comparing MSE to BCEGenerally, when dealing with autoencoders or similar problems, we train using a loss like MSE, which compares the generated image and the original one pixel by pixel to calculate the error. This is fine most of the time, but it would not be optimal in our case. Our images have values varying only between 0 and 1, and most of them are zero anyway, so the mean squared error will always be very low, which will not allow our model to train effectively.![mean_square_error_loss](img/mse_losses.png)The alternative we used was the Binary Cross Entropy (BCE) loss. Typically this is used for categorical problems, but in our case we are trying to distinguish between a high (1.0) and a low (0.0), so the cross-entropy loss can still be used. Because our values are between 0 and 1, we use the binary cross entropy.![binary_cross_entropy_loss](img/bce.png) Application - Denoising an Image This adds some noise to the input before passing it into the autoencoder network, but uses the original image as the ground truth, effectively training the autoencoder to reject the noise and learn the data encodings that represent the data beneath the noise. The only difference is in the training loop.![denoising_autoencoder_architecture](img/denoising.png)*Denoising Autoencoder Architecture (source: [Jeremy Jordan](https://www.jeremyjordan.me/autoencoders/))*
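To make this concrete, here is a rough illustration of how the two losses behave on a mostly-zero target. This cell is not part of the original notebook and the pixel values are invented; it simply shows that BCE yields a noticeably larger training signal than MSE on this kind of sparse data:
###Code
import torch
import torch.nn.functional as F
target = torch.zeros(1, 784)
target[0, :50] = 1.0                      # a few "ink" pixels, the rest is background zeros
prediction = torch.full((1, 784), 0.2)    # a mediocre, uniform reconstruction
print("MSE:", F.mse_loss(prediction, target).item())
print("BCE:", F.binary_cross_entropy(prediction, target).item())
###Output
_____no_output_____
###Markdown
Now let's add noise to the inputs and train a denoising version of the same autoencoder.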
###Code
def add_noise(clean_image, noise_factor=0.0):
random_noise = torch.randn_like(clean_image)
    random_noise /= random_noise.max()  # scale so the largest value is 1 (the minimum may fall slightly below -1)
noisy_image = clean_image + (noise_factor * random_noise)
return noisy_image
def train_noise(model, num_epochs=10, learning_rate=0.01, noise_factor=0.0):
global EPOCHS
model.train()
losses = []
optimiser = torch.optim.Adam(model.parameters(), lr=learning_rate)
criterion = nn.BCELoss()
# criterion = nn.MSELoss()
for _ in range(num_epochs):
EPOCHS += 1
total_loss = 0
num_batches = 0
for org_img, _ in train_loader:
optimiser.zero_grad()
org_img = org_img.double().view(-1, 784).to(device) / 255.0
noisy_img = add_noise(org_img, noise_factor)
gen_img = model(noisy_img).double()
loss = criterion(gen_img, org_img)
total_loss += loss
num_batches += 1
loss.backward() # backpropagate
optimiser.step()
average_loss = total_loss / num_batches
losses.append(average_loss)
print(f"Epoch {EPOCHS}:\tScore: {1/average_loss}")
return losses
EPOCHS = 0
INPUT_SIZE = 28*28
HIDDEN_SIZE = 128
CODE_SIZE = 32
LEARNING_RATE = 0.01
NOISE_FACTOR = 0.001
denoise_autoencoder = AutoEncoder(
INPUT_SIZE, HIDDEN_SIZE, CODE_SIZE).double().to(device)
num_epochs = 25
losses = train_noise(denoise_autoencoder, num_epochs, LEARNING_RATE, NOISE_FACTOR)
show_losses_helper(losses)
def validate_noise(model, noise_factor=NOISE_FACTOR):
model.eval()
criterion = torch.nn.BCELoss()
# criterion = torch.nn.MSELoss()
total_loss = 0
num_batches = 0
for val_img, _ in val_loader:
val_img = val_img.double().view(-1, 784).to(device) / 255.0
gen_img = model(add_noise(val_img, noise_factor)).double()
loss = criterion(gen_img, val_img)
total_loss += loss
num_batches += 1
average_loss = total_loss / num_batches
return 1/average_loss.item()
score = validate_noise(denoise_autoencoder)
print("Score: ", score)
def test_noise(model, noise_factor=NOISE_FACTOR):
model.eval()
criterion = torch.nn.BCELoss()
# criterion = torch.nn.MSELoss()
total_loss = 0
num_batches = 0
stored_images = []
for test_img, _ in test_loader:
test_img = test_img.double().view(-1, 784).to(device) / 255.0
noisy_img = add_noise(test_img, noise_factor)
gen_img = model(noisy_img).double()
loss = criterion(gen_img, test_img)
total_loss += loss
num_batches += 1
if np.random.random() > 0.90:
stored_images.append((test_img[0].clone().detach(
), noisy_img[0].clone().detach(), gen_img[0].clone().detach()))
score = average_loss = total_loss / num_batches
print(f"Score: {1/score}\n")
for original, noisy, generated in stored_images:
print("Original: ")
show_image_helper(original)
print("Noisy: ")
show_image_helper(noisy)
print("Generated: ")
show_image_helper(generated)
test_noise(denoise_autoencoder)
###Output
Score: 269.54182307638195
Original:
|
ai-platform-unified/notebooks/official/model_monitoring/model_monitoring.ipynb | ###Markdown
Vertex Model Monitoring <a href="https://console.cloud.google.com/ai-platform/notebooks/deploy-notebook?name=Model%20Monitoring&download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fai-platform-samples%2Fmaster%2Fai-platform-unified%2Fnotebooks%2Fofficial%2Fmodel_monitoring%2Fmodel_monitoring.ipynb" Open in GCP Notebooks Open in Colab View on GitHub Overview What is Model Monitoring?Modern applications rely on a well established set of capabilities to monitor the health of their services. Examples include:* software versioning* rigorous deployment processes* event logging* alerting/notification of situations requiring intervention* on-demand and automated diagnostic tracing* automated performance and functional testingYou should be able to manage your ML services with the same degree of power and flexibility with which you can manage your applications. That's what MLOps is all about - managing ML services with the best practices Google and the broader computing industry have learned from generations of experience deploying well engineered, reliable, and scalable services.Model monitoring is only one piece of the MLOps puzzle - it helps answer the following questions:* How well do recent service requests match the training data used to build your model? This is called **training-serving skew**.* How significantly are service requests evolving over time? This is called **drift detection**.If production traffic differs from training data, or varies substantially over time, that's likely to impact the quality of the answers your model produces. When that happens, you'd like to be alerted automatically and responsively, so that **you can anticipate problems before they affect your customer experiences or your revenue streams**. ObjectiveIn this notebook, you will learn how to... * deploy a pre-trained model* configure model monitoring* generate some artificial traffic* understand how to interpret the statistics, visualizations, and other data reported by the model monitoring feature Costs This tutorial uses billable components of Google Cloud:* Vertex AI* BigQueryLearn about [Vertex AIpricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storagepricing](https://cloud.google.com/storage/pricing), and use the [PricingCalculator](https://cloud.google.com/products/calculator/)to generate a cost estimate based on your projected usage. The example modelThe model you'll use in this notebook is based on [this blog post](https://cloud.google.com/blog/topics/developers-practitioners/churn-prediction-game-developers-using-google-analytics-4-ga4-and-bigquery-ml). The idea behind this model is that your company has extensive log data describing how your game users have interacted with the site. The raw data contains the following categories of information:- identity - unique player identity numbers- demographic features - information about the player, such as the geographic region in which a player is located- behavioral features - counts of the number of times a player has triggered certain game events, such as reaching a new level- churn propensity - this is the label or target feature; it provides an estimated probability that this player will churn, i.e. stop being an active player.The blog article referenced above explains how to use BigQuery to store the raw data, pre-process it for use in machine learning, and train a model. 
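Before diving in, here is a conceptual sketch of the skew/drift idea described above. The numbers are simulated and this is not the statistic the Vertex AI service computes internally; it simply compares the distribution of one feature in training data against recent serving traffic:
###Code
import numpy as np
rng = np.random.default_rng(0)
training_values = rng.normal(loc=10.0, scale=2.0, size=10_000)  # stand-in for one training feature
serving_values = rng.normal(loc=12.0, scale=2.0, size=1_000)    # stand-in for recent serving requests
bins = np.histogram_bin_edges(np.concatenate([training_values, serving_values]), bins=20)
train_dist = np.histogram(training_values, bins=bins)[0] / len(training_values)
serve_dist = np.histogram(serving_values, bins=bins)[0] / len(serving_values)
# A simple distance between the two normalized histograms; larger values suggest skew or drift.
print("L-infinity distance:", np.abs(train_dist - serve_dist).max())
###Output
_____no_output_____
###Markdown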
Because this notebook focuses on model monitoring, rather than training models, you're going to reuse a pre-trained version of this model, which has been exported to Google Cloud Storage. In the next section, you will set up your environment and import this model into your own project. Before you begin Setup your dependencies
###Code
import os
import sys
import IPython
assert sys.version_info.major == 3, "This notebook requires Python 3."
# Install Python package dependencies.
print("Installing TensorFlow 2.4.1 and TensorFlow Data Validation (TFDV)")
!pip3 install -q numpy
!pip3 install -q tensorflow==2.4.1 tensorflow_data_validation[visualization]
!pip3 install -q --upgrade google-api-python-client google-auth-oauthlib google-auth-httplib2 oauth2client requests
!pip3 install -q google-cloud-aiplatform
!pip3 install -q --upgrade google-cloud-storage==1.32.0
# Automatically restart kernel after installing new packages.
if not os.getenv("IS_TESTING"):
print("Restarting kernel...")
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
print("Done.")
import os
import random
import sys
import time
# Import required packages.
import numpy as np
###Output
_____no_output_____
###Markdown
Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. You'll use the *gcloud* command throughout this notebook. In the following cell, enter your project name and run the cell to authenticate yourself with the Google Cloud and initialize your *gcloud* configuration settings.**For this lab, we're going to use region us-central1 for all our resources (BigQuery training data, Cloud Storage bucket, model and endpoint locations, etc.). Those resources can be deployed in other regions, as long as they're consistently co-located, but we're going to use one fixed region to keep things as simple and error free as possible.**
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
REGION = "us-central1"
SUFFIX = "aiplatform.googleapis.com"
API_ENDPOINT = f"{REGION}-{SUFFIX}"
PREDICT_API_ENDPOINT = f"{REGION}-prediction-{SUFFIX}"
if os.getenv("IS_TESTING"):
!gcloud --quiet components install beta
!gcloud --quiet components update
!gcloud config set project $PROJECT_ID
!gcloud config set ai/region $REGION
###Output
_____no_output_____
###Markdown
Login to your Google Cloud account and enable AI services
###Code
# If on Google Cloud Notebooks, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
!gcloud services enable aiplatform.googleapis.com
###Output
_____no_output_____
###Markdown
Define some helper functionsRun the following cell to define some utility functions used throughout this notebook. Although these functions are not critical to understand the main concepts, feel free to expand the cell if you're curious or want to dive deeper into how some of your API requests are made.
###Code
# @title Utility functions
import copy
import os
from google.cloud.aiplatform_v1beta1.services.endpoint_service import \
EndpointServiceClient
from google.cloud.aiplatform_v1beta1.services.job_service import \
JobServiceClient
from google.cloud.aiplatform_v1beta1.services.prediction_service import \
PredictionServiceClient
from google.cloud.aiplatform_v1beta1.types.io import BigQuerySource
from google.cloud.aiplatform_v1beta1.types.model_deployment_monitoring_job import (
ModelDeploymentMonitoringJob, ModelDeploymentMonitoringObjectiveConfig,
ModelDeploymentMonitoringScheduleConfig)
from google.cloud.aiplatform_v1beta1.types.model_monitoring import (
ModelMonitoringAlertConfig, ModelMonitoringObjectiveConfig,
SamplingStrategy, ThresholdConfig)
from google.cloud.aiplatform_v1beta1.types.prediction_service import \
PredictRequest
from google.protobuf import json_format
from google.protobuf.duration_pb2 import Duration
from google.protobuf.struct_pb2 import Value
DEFAULT_THRESHOLD_VALUE = 0.001
def create_monitoring_job(objective_configs):
# Create sampling configuration.
random_sampling = SamplingStrategy.RandomSampleConfig(sample_rate=LOG_SAMPLE_RATE)
sampling_config = SamplingStrategy(random_sample_config=random_sampling)
# Create schedule configuration.
duration = Duration(seconds=MONITOR_INTERVAL)
schedule_config = ModelDeploymentMonitoringScheduleConfig(monitor_interval=duration)
# Create alerting configuration.
emails = [USER_EMAIL]
email_config = ModelMonitoringAlertConfig.EmailAlertConfig(user_emails=emails)
alerting_config = ModelMonitoringAlertConfig(email_alert_config=email_config)
# Create the monitoring job.
endpoint = f"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}"
predict_schema = ""
analysis_schema = ""
job = ModelDeploymentMonitoringJob(
display_name=JOB_NAME,
endpoint=endpoint,
model_deployment_monitoring_objective_configs=objective_configs,
logging_sampling_strategy=sampling_config,
model_deployment_monitoring_schedule_config=schedule_config,
model_monitoring_alert_config=alerting_config,
predict_instance_schema_uri=predict_schema,
analysis_instance_schema_uri=analysis_schema,
)
options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.create_model_deployment_monitoring_job(
parent=parent, model_deployment_monitoring_job=job
)
print("Created monitoring job:")
print(response)
return response
def get_thresholds(default_thresholds, custom_thresholds):
thresholds = {}
default_threshold = ThresholdConfig(value=DEFAULT_THRESHOLD_VALUE)
for feature in default_thresholds.split(","):
feature = feature.strip()
thresholds[feature] = default_threshold
for custom_threshold in custom_thresholds.split(","):
pair = custom_threshold.split(":")
if len(pair) != 2:
print(f"Invalid custom skew threshold: {custom_threshold}")
return
feature, value = pair
thresholds[feature] = ThresholdConfig(value=float(value))
return thresholds
def get_deployed_model_ids(endpoint_id):
client_options = dict(api_endpoint=API_ENDPOINT)
client = EndpointServiceClient(client_options=client_options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.get_endpoint(name=f"{parent}/endpoints/{endpoint_id}")
model_ids = []
for model in response.deployed_models:
model_ids.append(model.id)
return model_ids
def set_objectives(model_ids, objective_template):
# Use the same objective config for all models.
objective_configs = []
for model_id in model_ids:
objective_config = copy.deepcopy(objective_template)
objective_config.deployed_model_id = model_id
objective_configs.append(objective_config)
return objective_configs
def send_predict_request(endpoint, input):
client_options = {"api_endpoint": PREDICT_API_ENDPOINT}
client = PredictionServiceClient(client_options=client_options)
params = {}
params = json_format.ParseDict(params, Value())
request = PredictRequest(endpoint=endpoint, parameters=params)
inputs = [json_format.ParseDict(input, Value())]
request.instances.extend(inputs)
response = client.predict(request)
return response
def list_monitoring_jobs():
client_options = dict(api_endpoint=API_ENDPOINT)
parent = f"projects/{PROJECT_ID}/locations/us-central1"
client = JobServiceClient(client_options=client_options)
response = client.list_model_deployment_monitoring_jobs(parent=parent)
print(response)
def pause_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.pause_model_deployment_monitoring_job(name=job)
print(response)
def delete_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.delete_model_deployment_monitoring_job(name=job)
print(response)
# Sampling distributions for categorical features...
DAYOFWEEK = {1: 1040, 2: 1223, 3: 1352, 4: 1217, 5: 1078, 6: 1011, 7: 1110}
LANGUAGE = {
"en-us": 4807,
"en-gb": 678,
"ja-jp": 419,
"en-au": 310,
"en-ca": 299,
"de-de": 147,
"en-in": 130,
"en": 127,
"fr-fr": 94,
"pt-br": 81,
"es-us": 65,
"zh-tw": 64,
"zh-hans-cn": 55,
"es-mx": 53,
"nl-nl": 37,
"fr-ca": 34,
"en-za": 29,
"vi-vn": 29,
"en-nz": 29,
"es-es": 25,
}
OS = {"IOS": 3980, "ANDROID": 3798, "null": 253}
MONTH = {6: 3125, 7: 1838, 8: 1276, 9: 1718, 10: 74}
COUNTRY = {
"United States": 4395,
"India": 486,
"Japan": 450,
"Canada": 354,
"Australia": 327,
"United Kingdom": 303,
"Germany": 144,
"Mexico": 102,
"France": 97,
"Brazil": 93,
"Taiwan": 72,
"China": 65,
"Saudi Arabia": 49,
"Pakistan": 48,
"Egypt": 46,
"Netherlands": 45,
"Vietnam": 42,
"Philippines": 39,
"South Africa": 38,
}
# Means and standard deviations for numerical features...
MEAN_SD = {
"julianday": (204.6, 34.7),
"cnt_user_engagement": (30.8, 53.2),
"cnt_level_start_quickplay": (7.8, 28.9),
"cnt_level_end_quickplay": (5.0, 16.4),
"cnt_level_complete_quickplay": (2.1, 9.9),
"cnt_level_reset_quickplay": (2.0, 19.6),
"cnt_post_score": (4.9, 13.8),
"cnt_spend_virtual_currency": (0.4, 1.8),
"cnt_ad_reward": (0.1, 0.6),
"cnt_challenge_a_friend": (0.0, 0.3),
"cnt_completed_5_levels": (0.1, 0.4),
"cnt_use_extra_steps": (0.4, 1.7),
}
DEFAULT_INPUT = {
"cnt_ad_reward": 0,
"cnt_challenge_a_friend": 0,
"cnt_completed_5_levels": 1,
"cnt_level_complete_quickplay": 3,
"cnt_level_end_quickplay": 5,
"cnt_level_reset_quickplay": 2,
"cnt_level_start_quickplay": 6,
"cnt_post_score": 34,
"cnt_spend_virtual_currency": 0,
"cnt_use_extra_steps": 0,
"cnt_user_engagement": 120,
"country": "Denmark",
"dayofweek": 3,
"julianday": 254,
"language": "da-dk",
"month": 9,
"operating_system": "IOS",
"user_pseudo_id": "104B0770BAE16E8B53DF330C95881893",
}
###Output
_____no_output_____
###Markdown
Import your modelThe churn propensity model you'll be using in this notebook has been trained in BigQuery ML and exported to a Google Cloud Storage bucket. This illustrates how you can easily export a trained model and move a model from one cloud service to another. Run the next cell to import this model into your project. **If you've already imported your model, you can skip this step.**
###Code
MODEL_NAME = "churn"
IMAGE = "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-4:latest"
ARTIFACT = "gs://mco-mm/churn"
output = !gcloud --quiet beta ai models upload --container-image-uri=$IMAGE --artifact-uri=$ARTIFACT --display-name=$MODEL_NAME --format="value(model)"
print("model output: ", output)
MODEL_ID = output[1].split("/")[-1]
print(f"Model {MODEL_NAME}/{MODEL_ID} created.")
###Output
_____no_output_____
###Markdown
Deploy your endpointNow that you've imported your model into your project, you need to create an endpoint to serve your model. An endpoint can be thought of as a channel through which your model provides prediction services. Once established, you'll be able to make prediction requests on your model via the public internet. Your endpoint is also serverless, in the sense that Google ensures high availability by reducing single points of failure, and scalability by dynamically allocating resources to meet the demand for your service. In this way, you are able to focus on your model quality, and are freed from administrative and infrastructure concerns.Run the next cell to deploy your model to an endpoint. **This will take about ten minutes to complete. If you've already deployed a model to an endpoint, you can reuse your endpoint by running the cell after the next one.**
###Code
ENDPOINT_NAME = "churn"
output = !gcloud --quiet beta ai endpoints create --display-name=$ENDPOINT_NAME --format="value(name)"
print("endpoint output: ", output)
ENDPOINT = output[-1]
ENDPOINT_ID = ENDPOINT.split("/")[-1]
output = !gcloud --quiet beta ai endpoints deploy-model $ENDPOINT_ID --display-name=$ENDPOINT_NAME --model=$MODEL_ID --traffic-split="0=100"
DEPLOYED_MODEL_ID = output[1].split()[-1][:-1]
print(
f"Model {MODEL_NAME}/{MODEL_ID}/{DEPLOYED_MODEL_ID} deployed to Endpoint {ENDPOINT_NAME}/{ENDPOINT_ID}/{ENDPOINT}."
)
# @title Run this cell only if you want to reuse an existing endpoint.
if not os.getenv("IS_TESTING"):
ENDPOINT_ID = "" # @param {type:"string"}
ENDPOINT = f"projects/mco-mm/locations/us-central1/endpoints/{ENDPOINT_ID}"
###Output
_____no_output_____
###Markdown
Run a prediction testNow that you have imported a model and deployed that model to an endpoint, you are ready to verify that it's working. Run the next cell to send a test prediction request. If everything works as expected, you should receive a response encoded in a text representation called JSON.**Try this now by running the next cell and examine the results.**
###Code
import pprint as pp
print(ENDPOINT)
print("request:")
pp.pprint(DEFAULT_INPUT)
try:
resp = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("response")
pp.pprint(resp)
except Exception:
print("prediction request failed")
###Output
_____no_output_____
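###Markdown
If you'd like to work with the prediction programmatically rather than just printing it, the following optional sketch converts the first prediction in the response into a plain Python dictionary. It assumes the v1beta1 response shape used by the `send_predict_request` helper above, where each entry in `resp.predictions` is a protobuf `Value`.
###Code
# Optional: convert the first prediction into a plain Python dict so the
# churned_values / churned_probs / predicted_churn fields are easy to access.
# This assumes `resp` still holds the response from the cell above.
prediction = json_format.MessageToDict(resp.predictions[0])
print(prediction.get("predicted_churn"), prediction.get("churned_probs"))
###Output
_____no_output_____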
###Markdown
Taking a closer look at the results, we see the following elements:- **churned_values** - a set of possible values (0 and 1) for the target field- **churned_probs** - a corresponding set of probabilities for each possible target field value (5x10^-40 and 1.0, respectively)- **predicted_churn** - based on the probabilities, the predicted value of the target field (1)This response encodes the model's prediction in a format that is readily digestible by software, which makes this service ideal for automated use by an application. Start your monitoring jobNow that you've created an endpoint to serve prediction requests on your model, you're ready to start a monitoring job to keep an eye on model quality and to alert you if and when input begins to deviate in a way that may impact your model's prediction quality.In this section, you will configure and create a model monitoring job based on the churn propensity model you imported from BigQuery ML. Configure the following fields:1. User email - The email address to which you would like monitoring alerts sent.1. Log sample rate - Your prediction requests and responses are logged to BigQuery tables, which are automatically created when you create a monitoring job. This parameter specifies the desired logging frequency for those tables.1. Monitor interval - The time window over which to analyze your data and report anomalies. The minimum window is one hour (3600 seconds).1. Target field - The prediction target column name in the training dataset.1. Skew detection threshold - The skew threshold for each feature you want to monitor.1. Prediction drift threshold - The drift threshold for each feature you want to monitor.
###Code
USER_EMAIL = "" # @param {type:"string"}
JOB_NAME = "churn"
# Sampling rate (optional, default=.8)
LOG_SAMPLE_RATE = 0.8 # @param {type:"number"}
# Monitoring Interval in seconds (optional, default=3600).
MONITOR_INTERVAL = 3600 # @param {type:"number"}
# URI to training dataset.
DATASET_BQ_URI = "bq://mco-mm.bqmlga4.train" # @param {type:"string"}
# Prediction target column name in training dataset.
TARGET = "churned"
# Skew and drift thresholds.
SKEW_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
SKEW_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" # @param {type:"string"}
DRIFT_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
DRIFT_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" # @param {type:"string"}
###Output
_____no_output_____
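###Markdown
As an optional sanity check before creating the job, you can preview how these settings combine by calling the `get_thresholds` helper defined earlier. Features listed in the default string receive the global default threshold (0.001), while features in the custom string override it with their own value. This is purely illustrative; the next cell performs the same call as part of building the job configuration.
###Code
# Preview the combined skew thresholds: defaults plus custom overrides.
preview_thresholds = get_thresholds(SKEW_DEFAULT_THRESHOLDS, SKEW_CUSTOM_THRESHOLDS)
for feature, threshold in preview_thresholds.items():
    print(f"{feature}: {threshold.value}")
###Output
_____no_output_____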
###Markdown
Create your monitoring jobThe following code uses the Google Python client library to translate your configuration settings into a programmatic request to start a model monitoring job. Instantiating a monitoring job can take some time. If everything looks good with your request, you'll get a successful API response. Then, you'll need to check your email to receive a notification that the job is running.
###Code
skew_thresholds = get_thresholds(SKEW_DEFAULT_THRESHOLDS, SKEW_CUSTOM_THRESHOLDS)
drift_thresholds = get_thresholds(DRIFT_DEFAULT_THRESHOLDS, DRIFT_CUSTOM_THRESHOLDS)
skew_config = ModelMonitoringObjectiveConfig.TrainingPredictionSkewDetectionConfig(
skew_thresholds=skew_thresholds
)
drift_config = ModelMonitoringObjectiveConfig.PredictionDriftDetectionConfig(
drift_thresholds=drift_thresholds
)
training_dataset = ModelMonitoringObjectiveConfig.TrainingDataset(target_field=TARGET)
training_dataset.bigquery_source = BigQuerySource(input_uri=DATASET_BQ_URI)
objective_config = ModelMonitoringObjectiveConfig(
training_dataset=training_dataset,
training_prediction_skew_detection_config=skew_config,
prediction_drift_detection_config=drift_config,
)
model_ids = get_deployed_model_ids(ENDPOINT_ID)
objective_template = ModelDeploymentMonitoringObjectiveConfig(
objective_config=objective_config
)
objective_configs = set_objectives(model_ids, objective_template)
monitoring_job = create_monitoring_job(objective_configs)
# Run a prediction request to generate schema, if necessary.
try:
_ = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("prediction succeeded")
except Exception:
print("prediction failed")
###Output
_____no_output_____
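###Markdown
If you want to confirm that the job was created, a minimal check is to call the `list_monitoring_jobs` helper defined earlier, which prints the raw API response listing every model monitoring job in your project and region.
###Code
# List all model deployment monitoring jobs in this project and region.
list_monitoring_jobs()
###Output
_____no_output_____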
###Markdown
After a minute or two, you should receive email at the address you configured above for USER_EMAIL. This email confirms successful deployment of your monitoring job. Here's a sample of what this email might look like:As your monitoring job collects data, measurements are stored in Google Cloud Storage and you are free to examine your data at any time. The circled path in the image above specifies the location of your measurements in Google Cloud Storage. Run the following cell to take a look at your measurements in Cloud Storage.
###Code
!gsutil ls gs://cloud-ai-platform-fdfb4810-148b-4c86-903c-dbdff879f6e1/*/*
###Output
_____no_output_____
###Markdown
You will notice the following components in these Cloud Storage paths:- **cloud-ai-platform-..** - This is a bucket created for you and assigned to capture your service's prediction data. Each monitoring job you create will trigger creation of a new folder in this bucket.- **[model_monitoring|instance_schemas]/job-..** - This is your unique monitoring job number, which you can see above in both the response to your job creation request and the email notification. - **instance_schemas/job-../analysis** - This is the monitoring job's understanding and encoding of your training data's schema (field names, types, etc.).- **instance_schemas/job-../predict** - This is the first prediction made to your model after the current monitoring job was enabled.- **model_monitoring/job-../serving** - This folder is used to record data relevant to drift calculations. It contains measurement summaries for every hour your model serves traffic.- **model_monitoring/job-../training** - This folder is used to record data relevant to training-serving skew calculations. It contains an ongoing summary of prediction data relative to training data. You can create monitoring jobs with other user interfacesIn the previous cells, you created a monitoring job using the Python client library. You can also use the *gcloud* command line tool to create a model monitoring job and, in the near future, you will be able to use the Cloud Console for this function as well. Generate test data to trigger alertingNow you are ready to test the monitoring function. Run the following cell, which will generate fabricated test predictions designed to trigger the thresholds you specified above. It takes about five minutes to run this cell and at least an hour to assess and report anomalies in skew or drift, so after running this cell, feel free to proceed with the notebook; you'll see how to examine the resulting alert later.
###Code
def random_uid():
digits = [str(i) for i in range(10)] + ["A", "B", "C", "D", "E", "F"]
return "".join(random.choices(digits, k=32))
def monitoring_test(count, sleep, perturb_num={}, perturb_cat={}):
# Use random sampling and mean/sd with gaussian distribution to model
# training data. Then modify sampling distros for two categorical features
# and mean/sd for two numerical features.
mean_sd = MEAN_SD.copy()
country = COUNTRY.copy()
for k, (mean_fn, sd_fn) in perturb_num.items():
orig_mean, orig_sd = MEAN_SD[k]
mean_sd[k] = (mean_fn(orig_mean), sd_fn(orig_sd))
for k, v in perturb_cat.items():
country[k] = v
for i in range(0, count):
input = DEFAULT_INPUT.copy()
input["user_pseudo_id"] = str(random_uid())
input["country"] = random.choices([*country], list(country.values()))[0]
input["dayofweek"] = random.choices([*DAYOFWEEK], list(DAYOFWEEK.values()))[0]
input["language"] = str(random.choices([*LANGUAGE], list(LANGUAGE.values()))[0])
input["operating_system"] = str(random.choices([*OS], list(OS.values()))[0])
input["month"] = random.choices([*MONTH], list(MONTH.values()))[0]
for key, (mean, sd) in mean_sd.items():
sample_val = round(float(np.random.normal(mean, sd, 1)))
val = max(sample_val, 0)
input[key] = val
print(f"Sending prediction {i}")
try:
send_predict_request(ENDPOINT, input)
except Exception:
print("prediction request failed")
time.sleep(sleep)
print("Test Completed.")
test_time = 300
tests_per_sec = 1
sleep_time = 1 / tests_per_sec
iterations = test_time * tests_per_sec
perturb_num = {"cnt_user_engagement": (lambda x: x * 3, lambda x: x / 3)}
perturb_cat = {"Japan": max(COUNTRY.values()) * 2}
monitoring_test(iterations, sleep_time, perturb_num, perturb_cat)
###Output
_____no_output_____
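###Markdown
If you also want a baseline of traffic that should not trigger alerts, you can call the same function without perturbations; `perturb_num` and `perturb_cat` default to empty dictionaries, so the generated requests simply follow the training distributions. A short unperturbed burst might look like the sketch below.
###Code
# Optional: send a short burst of unperturbed traffic that mirrors the
# training distributions and should stay below the skew/drift thresholds.
monitoring_test(30, sleep_time)
###Output
_____no_output_____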
###Markdown
Interpret your resultsWhile waiting for your results, which, as noted, may take up to an hour, you can read ahead to get a sense of the alerting experience. Here's what a sample email alert looks like... This email is warning you that the *cnt_user_engagement*, *country* and *language* feature values seen in production have skewed above your threshold between training and serving your model. It's also telling you that the *cnt_user_engagement* feature value is drifting significantly over time, again, as per your threshold specification. Monitoring results in the Cloud ConsoleYou can examine your model monitoring data from the Cloud Console. Below is a screenshot of those capabilities. Monitoring Status Monitoring Alerts Clean upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:
###Code
# Undeploy the model from the endpoint before deleting resources.
!gcloud ai endpoints undeploy-model $ENDPOINT_ID --deployed-model-id $DEPLOYED_MODEL_ID
# Delete endpoint resource (gcloud expects the endpoint ID, not the display name).
!gcloud ai endpoints delete $ENDPOINT_ID --quiet
# Delete model resource (gcloud expects the model ID, not the display name).
!gcloud ai models delete $MODEL_ID --quiet
###Output
_____no_output_____
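###Markdown
The cell above removes the endpoint and model, but not the monitoring job itself. A minimal sketch for cleaning that up, using the helper functions defined earlier and assuming `monitoring_job` still holds the response returned by `create_monitoring_job`, is shown below.
###Code
# Pause and then delete the model monitoring job created earlier.
# monitoring_job.name holds the fully qualified resource name of the job.
pause_monitoring_job(monitoring_job.name)
delete_monitoring_job(monitoring_job.name)
###Output
_____no_output_____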
###Markdown
Vertex Model Monitoring Open in GCP Notebooks Open in Colab View on GitHub Overview What is Model Monitoring?Modern applications rely on a well-established set of capabilities to monitor the health of their services. Examples include:* software versioning* rigorous deployment processes* event logging* alerting/notification of situations requiring intervention* on-demand and automated diagnostic tracing* automated performance and functional testingYou should be able to manage your ML services with the same degree of power and flexibility with which you can manage your applications. That's what MLOps is all about - managing ML services with the best practices Google and the broader computing industry have learned from generations of experience deploying well-engineered, reliable, and scalable services.Model monitoring is only one piece of the MLOps puzzle - it helps answer the following questions:* How well do recent service requests match the training data used to build your model? This is called **training-serving skew**.* How significantly are service requests evolving over time? This is called **drift detection**.If production traffic differs from training data, or varies substantially over time, that's likely to impact the quality of the answers your model produces. When that happens, you'd like to be alerted automatically and responsively, so that **you can anticipate problems before they affect your customer experiences or your revenue streams**. ObjectiveIn this notebook, you will learn how to... * deploy a pre-trained model* configure model monitoring* generate some artificial traffic* understand how to interpret the statistics, visualizations, and other data reported by the model monitoring feature Costs This tutorial uses billable components of Google Cloud:* Vertex AI* BigQueryLearn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. The example modelThe model you'll use in this notebook is based on [this blog post](https://cloud.google.com/blog/topics/developers-practitioners/churn-prediction-game-developers-using-google-analytics-4-ga4-and-bigquery-ml). The idea behind this model is that your company has extensive log data describing how your game users have interacted with the site. The raw data contains the following categories of information:- identity - unique player identity numbers- demographic features - information about the player, such as the geographic region in which a player is located- behavioral features - counts of the number of times a player has triggered certain game events, such as reaching a new level- churn propensity - this is the label or target feature; it provides an estimated probability that this player will churn, i.e. stop being an active player.The blog article referenced above explains how to use BigQuery to store the raw data, pre-process it for use in machine learning, and train a model. Because this notebook focuses on model monitoring, rather than training models, you're going to reuse a pre-trained version of this model, which has been exported to Google Cloud Storage. In the next section, you will set up your environment and import this model into your own project. Before you begin Setup your dependencies
###Code
import os
import sys
assert sys.version_info.major == 3, "This notebook requires Python 3."
# Google Cloud Notebook requires dependencies to be installed with '--user'
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
if 'google.colab' in sys.modules:
from google.colab import auth
auth.authenticate_user()
# Install Python package dependencies.
! pip3 install {USER_FLAG} --quiet --upgrade google-api-python-client google-auth-oauthlib \
google-auth-httplib2 oauth2client requests \
google-cloud-aiplatform google-cloud-storage==1.32.0
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. Enter your project ID in the first line of the cell below.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. You'll use the *gcloud* command throughout this notebook. In the following cell, enter your project name and run the cell to authenticate yourself with Google Cloud and initialize your *gcloud* configuration settings.**Model monitoring is currently supported in regions us-central1, europe-west4, asia-east1, and asia-southeast1. To keep things simple for this lab, we're going to use region us-central1 for all our resources (BigQuery training data, Cloud Storage bucket, model and endpoint locations, etc.). You can use any supported region, so long as all resources are co-located.**
###Code
# Import globally needed dependencies here, after kernel restart.
import copy
import numpy as np
import os
import pprint as pp
import random
import sys
import time
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
REGION = "us-central1" # @param {type:"string"}
SUFFIX = "aiplatform.googleapis.com"
API_ENDPOINT = f"{REGION}-{SUFFIX}"
PREDICT_API_ENDPOINT = f"{REGION}-prediction-{SUFFIX}"
if os.getenv("IS_TESTING"):
!gcloud --quiet components install beta
!gcloud --quiet components update
!gcloud config set project $PROJECT_ID
!gcloud config set ai/region $REGION
###Output
_____no_output_____
###Markdown
Login to your Google Cloud account and enable AI services
###Code
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
!gcloud services enable aiplatform.googleapis.com
###Output
_____no_output_____
###Markdown
Define utilitiesRun the following cells to define some utility functions and distributions used later in this notebook. Although these utilities are not critical to understanding the main concepts, feel free to expand the cells in this section if you're curious or want to dive deeper into how some of your API requests are made.
###Code
# @title Utility imports and constants
from google.cloud.aiplatform_v1beta1.services.endpoint_service import \
EndpointServiceClient
from google.cloud.aiplatform_v1beta1.services.job_service import \
JobServiceClient
from google.cloud.aiplatform_v1beta1.services.prediction_service import \
PredictionServiceClient
from google.cloud.aiplatform_v1beta1.types.io import BigQuerySource
from google.cloud.aiplatform_v1beta1.types.model_deployment_monitoring_job import (
ModelDeploymentMonitoringJob, ModelDeploymentMonitoringObjectiveConfig,
ModelDeploymentMonitoringScheduleConfig)
from google.cloud.aiplatform_v1beta1.types.model_monitoring import (
ModelMonitoringAlertConfig, ModelMonitoringObjectiveConfig,
SamplingStrategy, ThresholdConfig)
from google.cloud.aiplatform_v1beta1.types.prediction_service import \
PredictRequest
from google.protobuf import json_format
from google.protobuf.duration_pb2 import Duration
from google.protobuf.struct_pb2 import Value
# This is the default value at which you would like the monitoring function to trigger an alert.
# In other words, this value fine tunes the alerting sensitivity. This threshold can be customized
# on a per feature basis but this is the global default setting.
DEFAULT_THRESHOLD_VALUE = 0.001
# @title Utility functions
def create_monitoring_job(objective_configs):
# Create sampling configuration.
random_sampling = SamplingStrategy.RandomSampleConfig(sample_rate=LOG_SAMPLE_RATE)
sampling_config = SamplingStrategy(random_sample_config=random_sampling)
# Create schedule configuration.
duration = Duration(seconds=MONITOR_INTERVAL)
schedule_config = ModelDeploymentMonitoringScheduleConfig(monitor_interval=duration)
# Create alerting configuration.
emails = [USER_EMAIL]
email_config = ModelMonitoringAlertConfig.EmailAlertConfig(user_emails=emails)
alerting_config = ModelMonitoringAlertConfig(email_alert_config=email_config)
# Create the monitoring job.
endpoint = f"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}"
predict_schema = ""
analysis_schema = ""
job = ModelDeploymentMonitoringJob(
display_name=JOB_NAME,
endpoint=endpoint,
model_deployment_monitoring_objective_configs=objective_configs,
logging_sampling_strategy=sampling_config,
model_deployment_monitoring_schedule_config=schedule_config,
model_monitoring_alert_config=alerting_config,
predict_instance_schema_uri=predict_schema,
analysis_instance_schema_uri=analysis_schema,
)
options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.create_model_deployment_monitoring_job(
parent=parent, model_deployment_monitoring_job=job
)
print("Created monitoring job:")
print(response)
return response
def get_thresholds(default_thresholds, custom_thresholds):
thresholds = {}
default_threshold = ThresholdConfig(value=DEFAULT_THRESHOLD_VALUE)
for feature in default_thresholds.split(","):
feature = feature.strip()
thresholds[feature] = default_threshold
for custom_threshold in custom_thresholds.split(","):
pair = custom_threshold.split(":")
if len(pair) != 2:
print(f"Invalid custom skew threshold: {custom_threshold}")
return
feature, value = pair
thresholds[feature] = ThresholdConfig(value=float(value))
return thresholds
def get_deployed_model_ids(endpoint_id):
client_options = dict(api_endpoint=API_ENDPOINT)
client = EndpointServiceClient(client_options=client_options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.get_endpoint(name=f"{parent}/endpoints/{endpoint_id}")
model_ids = []
for model in response.deployed_models:
model_ids.append(model.id)
return model_ids
def set_objectives(model_ids, objective_template):
# Use the same objective config for all models.
objective_configs = []
for model_id in model_ids:
objective_config = copy.deepcopy(objective_template)
objective_config.deployed_model_id = model_id
objective_configs.append(objective_config)
return objective_configs
def send_predict_request(endpoint, input):
client_options = {"api_endpoint": PREDICT_API_ENDPOINT}
client = PredictionServiceClient(client_options=client_options)
params = {}
params = json_format.ParseDict(params, Value())
request = PredictRequest(endpoint=endpoint, parameters=params)
inputs = [json_format.ParseDict(input, Value())]
request.instances.extend(inputs)
response = client.predict(request)
return response
def list_monitoring_jobs():
client_options = dict(api_endpoint=API_ENDPOINT)
parent = f"projects/{PROJECT_ID}/locations/us-central1"
client = JobServiceClient(client_options=client_options)
response = client.list_model_deployment_monitoring_jobs(parent=parent)
print(response)
def pause_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.pause_model_deployment_monitoring_job(name=job)
print(response)
def delete_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.delete_model_deployment_monitoring_job(name=job)
print(response)
# @title Utility distributions
# This cell contains parameters enabling us to generate realistic test data that closely
# models the feature distributions found in the training data.
DAYOFWEEK = {1: 1040, 2: 1223, 3: 1352, 4: 1217, 5: 1078, 6: 1011, 7: 1110}
LANGUAGE = {
"en-us": 4807,
"en-gb": 678,
"ja-jp": 419,
"en-au": 310,
"en-ca": 299,
"de-de": 147,
"en-in": 130,
"en": 127,
"fr-fr": 94,
"pt-br": 81,
"es-us": 65,
"zh-tw": 64,
"zh-hans-cn": 55,
"es-mx": 53,
"nl-nl": 37,
"fr-ca": 34,
"en-za": 29,
"vi-vn": 29,
"en-nz": 29,
"es-es": 25,
}
OS = {"IOS": 3980, "ANDROID": 3798, "null": 253}
MONTH = {6: 3125, 7: 1838, 8: 1276, 9: 1718, 10: 74}
COUNTRY = {
"United States": 4395,
"India": 486,
"Japan": 450,
"Canada": 354,
"Australia": 327,
"United Kingdom": 303,
"Germany": 144,
"Mexico": 102,
"France": 97,
"Brazil": 93,
"Taiwan": 72,
"China": 65,
"Saudi Arabia": 49,
"Pakistan": 48,
"Egypt": 46,
"Netherlands": 45,
"Vietnam": 42,
"Philippines": 39,
"South Africa": 38,
}
# Means and standard deviations for numerical features...
MEAN_SD = {
"julianday": (204.6, 34.7),
"cnt_user_engagement": (30.8, 53.2),
"cnt_level_start_quickplay": (7.8, 28.9),
"cnt_level_end_quickplay": (5.0, 16.4),
"cnt_level_complete_quickplay": (2.1, 9.9),
"cnt_level_reset_quickplay": (2.0, 19.6),
"cnt_post_score": (4.9, 13.8),
"cnt_spend_virtual_currency": (0.4, 1.8),
"cnt_ad_reward": (0.1, 0.6),
"cnt_challenge_a_friend": (0.0, 0.3),
"cnt_completed_5_levels": (0.1, 0.4),
"cnt_use_extra_steps": (0.4, 1.7),
}
DEFAULT_INPUT = {
"cnt_ad_reward": 0,
"cnt_challenge_a_friend": 0,
"cnt_completed_5_levels": 1,
"cnt_level_complete_quickplay": 3,
"cnt_level_end_quickplay": 5,
"cnt_level_reset_quickplay": 2,
"cnt_level_start_quickplay": 6,
"cnt_post_score": 34,
"cnt_spend_virtual_currency": 0,
"cnt_use_extra_steps": 0,
"cnt_user_engagement": 120,
"country": "Denmark",
"dayofweek": 3,
"julianday": 254,
"language": "da-dk",
"month": 9,
"operating_system": "IOS",
"user_pseudo_id": "104B0770BAE16E8B53DF330C95881893",
}
###Output
_____no_output_____
###Markdown
Import your modelThe churn propensity model you'll be using in this notebook has been trained in BigQuery ML and exported to a Google Cloud Storage bucket. This illustrates how you can easily export a trained model and move a model from one cloud service to another. Run the next cell to import this model into your project. **If you've already imported your model, you can skip this step.**
###Code
MODEL_NAME = "churn"
IMAGE = "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-4:latest"
ARTIFACT = "gs://mco-mm/churn"
output = !gcloud --quiet beta ai models upload --container-image-uri=$IMAGE --artifact-uri=$ARTIFACT --display-name=$MODEL_NAME --format="value(model)"
MODEL_ID = output[1].split("/")[-1]
if _exit_code == 0:
print(f"Model {MODEL_NAME}/{MODEL_ID} created.")
else:
print(f"Error creating model: {output}")
###Output
_____no_output_____
###Markdown
Deploy your endpointNow that you've imported your model into your project, you need to create an endpoint to serve your model. An endpoint can be thought of as a channel through which your model provides prediction services. Once established, you'll be able to make prediction requests on your model via the public internet. Your endpoint is also serverless, in the sense that Google ensures high availability by reducing single points of failure, and scalability by dynamically allocating resources to meet the demand for your service. In this way, you are able to focus on your model quality, and are freed from administrative and infrastructure concerns.Run the next cell to deploy your model to an endpoint. **This will take about ten minutes to complete. If you've already deployed a model to an endpoint, you can reuse your endpoint by running the cell after the next one.**
###Code
ENDPOINT_NAME = "churn"
output = !gcloud --quiet beta ai endpoints create --display-name=$ENDPOINT_NAME --format="value(name)"
if _exit_code == 0:
print("Endpoint created.")
else:
print(f"Error creating endpoint: {output}")
ENDPOINT = output[-1]
ENDPOINT_ID = ENDPOINT.split("/")[-1]
output = !gcloud --quiet beta ai endpoints deploy-model $ENDPOINT_ID --display-name=$ENDPOINT_NAME --model=$MODEL_ID --traffic-split="0=100"
DEPLOYED_MODEL_ID = output[-1].split()[-1][:-1]
if _exit_code == 0:
print(
f"Model {MODEL_NAME}/{MODEL_ID} deployed to Endpoint {ENDPOINT_NAME}/{ENDPOINT_ID}."
)
else:
print(f"Error deploying model to endpoint: {output}")
###Output
_____no_output_____
###Markdown
If you already have a deployed endpointYou can reuse your existing endpoint by filling in the value of your endpoint ID in the next cell and running it. **If you've just deployed an endpoint in the previous cell, you should skip this step.**
###Code
# @title Run this cell only if you want to reuse an existing endpoint.
if not os.getenv("IS_TESTING"):
ENDPOINT_ID = "" # @param {type:"string"}
if ENDPOINT_ID:
ENDPOINT = f"projects/{PROJECT_ID}/locations/us-central1/endpoints/{ENDPOINT_ID}"
print(f"Using endpoint {ENDPOINT}")
else:
print("If you want to reuse an existing endpoint, you must specify the endpoint id above.")
###Output
_____no_output_____
###Markdown
Run a prediction testNow that you have imported a model and deployed that model to an endpoint, you are ready to verify that it's working. Run the next cell to send a test prediction request. If everything works as expected, you should receive a response encoded in a text representation called JSON.**Try this now by running the next cell and examine the results.**
###Code
print(ENDPOINT)
print("request:")
pp.pprint(DEFAULT_INPUT)
try:
resp = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("response")
pp.pprint(resp)
except Exception:
print("prediction request failed")
###Output
_____no_output_____
###Markdown
Taking a closer look at the results, we see the following elements:- **churned_values** - a set of possible values (0 and 1) for the target field- **churned_probs** - a corresponding set of probabilities for each possible target field value (5x10^-40 and 1.0, respectively)- **predicted_churn** - based on the probabilities, the predicted value of the target field (1)This response encodes the model's prediction in a format that is readily digestible by software, which makes this service ideal for automated use by an application. Start your monitoring jobNow that you've created an endpoint to serve prediction requests on your model, you're ready to start a monitoring job to keep an eye on model quality and to alert you if and when input begins to deviate in a way that may impact your model's prediction quality.In this section, you will configure and create a model monitoring job based on the churn propensity model you imported from BigQuery ML. Configure the following fields:1. User email - The email address to which you would like monitoring alerts sent.1. Log sample rate - Your prediction requests and responses are logged to BigQuery tables, which are automatically created when you create a monitoring job. This parameter specifies the desired logging frequency for those tables.1. Monitor interval - The time window over which to analyze your data and report anomalies. The minimum window is one hour (3600 seconds).1. Target field - The prediction target column name in the training dataset.1. Skew detection threshold - The skew threshold for each feature you want to monitor.1. Prediction drift threshold - The drift threshold for each feature you want to monitor.
###Code
USER_EMAIL = "[your-email-address]" # @param {type:"string"}
JOB_NAME = "churn"
# Sampling rate (optional, default=.8)
LOG_SAMPLE_RATE = 0.8 # @param {type:"number"}
# Monitoring Interval in seconds (optional, default=3600).
MONITOR_INTERVAL = 3600 # @param {type:"number"}
# URI to training dataset.
DATASET_BQ_URI = "bq://mco-mm.bqmlga4.train" # @param {type:"string"}
# Prediction target column name in training dataset.
TARGET = "churned"
# Skew and drift thresholds.
SKEW_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
SKEW_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" # @param {type:"string"}
DRIFT_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
DRIFT_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" # @param {type:"string"}
###Output
_____no_output_____
###Markdown
Create your monitoring jobThe following code uses the Google Python client library to translate your configuration settings into a programmatic request to start a model monitoring job. To do this successfully, you need to specify your alerting thresholds (for both skew and drift), your training data source, and apply those settings to all deployed models on your new endpoint (of which there should only be one at this point).Instantiating a monitoring job can take some time. If everything looks good with your request, you'll get a successful API response. Then, you'll need to check your email to receive a notification that the job is running.
###Code
# Set thresholds specifying alerting criteria for training/serving skew and create config object.
skew_thresholds = get_thresholds(SKEW_DEFAULT_THRESHOLDS, SKEW_CUSTOM_THRESHOLDS)
skew_config = ModelMonitoringObjectiveConfig.TrainingPredictionSkewDetectionConfig(
skew_thresholds=skew_thresholds
)
# Set thresholds specifying alerting criteria for serving drift and create config object.
drift_thresholds = get_thresholds(DRIFT_DEFAULT_THRESHOLDS, DRIFT_CUSTOM_THRESHOLDS)
drift_config = ModelMonitoringObjectiveConfig.PredictionDriftDetectionConfig(
drift_thresholds=drift_thresholds
)
# Specify training dataset source location (used for schema generation).
training_dataset = ModelMonitoringObjectiveConfig.TrainingDataset(target_field=TARGET)
training_dataset.bigquery_source = BigQuerySource(input_uri=DATASET_BQ_URI)
# Aggregate the above settings into a ModelMonitoringObjectiveConfig object and use
# that object to adjust the ModelDeploymentMonitoringObjectiveConfig object.
objective_config = ModelMonitoringObjectiveConfig(
training_dataset=training_dataset,
training_prediction_skew_detection_config=skew_config,
prediction_drift_detection_config=drift_config,
)
objective_template = ModelDeploymentMonitoringObjectiveConfig(
objective_config=objective_config
)
# Find all deployed model ids on the created endpoint and set objectives for each.
model_ids = get_deployed_model_ids(ENDPOINT_ID)
objective_configs = set_objectives(model_ids, objective_template)
# Create the monitoring job for all deployed models on this endpoint.
monitoring_job = create_monitoring_job(objective_configs)
# Run a prediction request to generate schema, if necessary.
try:
_ = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("prediction succeeded")
except Exception:
print("prediction failed")
###Output
_____no_output_____
###Markdown
After a minute or two, you should receive email at the address you configured above for USER_EMAIL. This email confirms successful deployment of your monitoring job. Here's a sample of what this email might look like:As your monitoring job collects data, measurements are stored in Google Cloud Storage and you are free to examine your data at any time. The circled path in the image above specifies the location of your measurements in Google Cloud Storage. Run the following cell to take a look at your measurements in Cloud Storage.
###Code
!gsutil ls gs://cloud-ai-platform-fdfb4810-148b-4c86-903c-dbdff879f6e1/*/*
###Output
_____no_output_____
###Markdown
You will notice the following components in these Cloud Storage paths:- **cloud-ai-platform-..** - This is a bucket created for you and assigned to capture your service's prediction data. Each monitoring job you create will trigger creation of a new folder in this bucket.- **[model_monitoring|instance_schemas]/job-..** - This is your unique monitoring job number, which you can see above in both the response to your job creation request and the email notification. - **instance_schemas/job-../analysis** - This is the monitoring job's understanding and encoding of your training data's schema (field names, types, etc.).- **instance_schemas/job-../predict** - This is the first prediction made to your model after the current monitoring job was enabled.- **model_monitoring/job-../serving** - This folder is used to record data relevant to drift calculations. It contains measurement summaries for every hour your model serves traffic.- **model_monitoring/job-../training** - This folder is used to record data relevant to training-serving skew calculations. It contains an ongoing summary of prediction data relative to training data. You can create monitoring jobs with other user interfacesIn the previous cells, you created a monitoring job using the Python client library. You can also use the *gcloud* command line tool to create a model monitoring job and, in the near future, you will be able to use the Cloud Console for this function as well. Generate test data to trigger alertingNow you are ready to test the monitoring function. Run the following cell, which will generate fabricated test predictions designed to trigger the thresholds you specified above. It takes about five minutes to run this cell and at least an hour to assess and report anomalies in skew or drift, so after running this cell, feel free to proceed with the notebook; you'll see how to examine the resulting alert later.
###Code
def random_uid():
digits = [str(i) for i in range(10)] + ["A", "B", "C", "D", "E", "F"]
return "".join(random.choices(digits, k=32))
def monitoring_test(count, sleep, perturb_num={}, perturb_cat={}):
# Use random sampling and mean/sd with gaussian distribution to model
# training data. Then modify sampling distros for two categorical features
# and mean/sd for two numerical features.
mean_sd = MEAN_SD.copy()
country = COUNTRY.copy()
for k, (mean_fn, sd_fn) in perturb_num.items():
orig_mean, orig_sd = MEAN_SD[k]
mean_sd[k] = (mean_fn(orig_mean), sd_fn(orig_sd))
for k, v in perturb_cat.items():
country[k] = v
for i in range(0, count):
input = DEFAULT_INPUT.copy()
input["user_pseudo_id"] = str(random_uid())
input["country"] = random.choices([*country], list(country.values()))[0]
input["dayofweek"] = random.choices([*DAYOFWEEK], list(DAYOFWEEK.values()))[0]
input["language"] = str(random.choices([*LANGUAGE], list(LANGUAGE.values()))[0])
input["operating_system"] = str(random.choices([*OS], list(OS.values()))[0])
input["month"] = random.choices([*MONTH], list(MONTH.values()))[0]
for key, (mean, sd) in mean_sd.items():
sample_val = round(float(np.random.normal(mean, sd, 1)))
val = max(sample_val, 0)
input[key] = val
print(f"Sending prediction {i}")
try:
send_predict_request(ENDPOINT, input)
except Exception:
print("prediction request failed")
time.sleep(sleep)
print("Test Completed.")
test_time = 300
tests_per_sec = 1
sleep_time = 1 / tests_per_sec
iterations = test_time * tests_per_sec
perturb_num = {"cnt_user_engagement": (lambda x: x * 3, lambda x: x / 3)}
perturb_cat = {"Japan": max(COUNTRY.values()) * 2}
monitoring_test(iterations, sleep_time, perturb_num, perturb_cat)
###Output
_____no_output_____
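###Markdown
To build some intuition for what these distance-based alerts measure, the following optional sketch compares the training distribution of the `country` feature with the shifted distribution produced by the perturbation above, using an L-infinity distance over normalized frequencies. This is only an illustration of the general idea behind distribution-distance thresholds, not a reproduction of the exact statistics the monitoring service computes.
###Code
# Illustrative only: compare two categorical distributions with an L-infinity
# distance over their normalized frequencies. A larger shift in any single
# category yields a larger distance, which is conceptually what gets compared
# against the skew/drift thresholds configured above.
def linf_distance(train_counts, serving_counts):
    keys = set(train_counts) | set(serving_counts)
    train_total = sum(train_counts.values())
    serving_total = sum(serving_counts.values())
    return max(
        abs(train_counts.get(k, 0) / train_total - serving_counts.get(k, 0) / serving_total)
        for k in keys
    )
# Reproduce the country perturbation used by the test-traffic generator above.
shifted_country = COUNTRY.copy()
shifted_country["Japan"] = max(COUNTRY.values()) * 2
print("country L-infinity distance:", round(linf_distance(COUNTRY, shifted_country), 3))
###Output
_____no_output_____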
###Markdown
Interpret your resultsWhile waiting for your results, which, as noted, may take up to an hour, you can read ahead to get a sense of the alerting experience. Here's what a sample email alert looks like... This email is warning you that the *cnt_user_engagement*, *country* and *language* feature values seen in production have skewed above your threshold between training and serving your model. It's also telling you that the *cnt_user_engagement* feature value is drifting significantly over time, again, as per your threshold specification. Monitoring results in the Cloud ConsoleYou can examine your model monitoring data from the Cloud Console. Below is a screenshot of those capabilities. Monitoring Status Monitoring Alerts Clean upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:
###Code
out = !gcloud ai endpoints undeploy-model $ENDPOINT_ID --deployed-model-id $DEPLOYED_MODEL_ID
if _exit_code == 0:
print("Model undeployed.")
else:
print("Error undeploying model:", out)
out = !gcloud ai endpoints delete $ENDPOINT_ID --quiet
if _exit_code == 0:
print("Endpoint deleted.")
else:
print("Error deleting endpoint:", out)
out = !gcloud ai models delete $MODEL_ID --quiet
if _exit_code == 0:
print("Model deleted.")
else:
print("Error deleting model:", out)
###Output
_____no_output_____
###Markdown
Vertex Model Monitoring <a href="https://console.cloud.google.com/ai-platform/notebooks/deploy-notebook?name=Model%20Monitoring&download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fai-platform-samples%2Fmaster%2Fai-platform-unified%2Fnotebooks%2Fofficial%2Fmodel_monitoring%2Fmodel_monitoring.ipynb" Open in GCP Notebooks Open in Colab View on GitHub Overview What is Model Monitoring?Modern applications rely on a well-established set of capabilities to monitor the health of their services. Examples include:* software versioning* rigorous deployment processes* event logging* alerting/notification of situations requiring intervention* on-demand and automated diagnostic tracing* automated performance and functional testingYou should be able to manage your ML services with the same degree of power and flexibility with which you can manage your applications. That's what MLOps is all about - managing ML services with the best practices Google and the broader computing industry have learned from generations of experience deploying well-engineered, reliable, and scalable services.Model monitoring is only one piece of the MLOps puzzle - it helps answer the following questions:* How well do recent service requests match the training data used to build your model? This is called **training-serving skew**.* How significantly are service requests evolving over time? This is called **drift detection**.If production traffic differs from training data, or varies substantially over time, that's likely to impact the quality of the answers your model produces. When that happens, you'd like to be alerted automatically and responsively, so that **you can anticipate problems before they affect your customer experiences or your revenue streams**. ObjectiveIn this notebook, you will learn how to... * deploy a pre-trained model* configure model monitoring* generate some artificial traffic* understand how to interpret the statistics, visualizations, and other data reported by the model monitoring feature Costs This tutorial uses billable components of Google Cloud:* Vertex AI* BigQueryLearn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. The example modelThe model you'll use in this notebook is based on [this blog post](https://cloud.google.com/blog/topics/developers-practitioners/churn-prediction-game-developers-using-google-analytics-4-ga4-and-bigquery-ml). The idea behind this model is that your company has extensive log data describing how your game users have interacted with the site. The raw data contains the following categories of information:- identity - unique player identity numbers- demographic features - information about the player, such as the geographic region in which a player is located- behavioral features - counts of the number of times a player has triggered certain game events, such as reaching a new level- churn propensity - this is the label or target feature; it provides an estimated probability that this player will churn, i.e. stop being an active player.The blog article referenced above explains how to use BigQuery to store the raw data, pre-process it for use in machine learning, and train a model.
Because this notebook focuses on model monitoring, rather than training models, you're going to reuse a pre-trained version of this model, which has been exported to Google Cloud Storage. In the next section, you will set up your environment and import this model into your own project. Before you begin Setup your dependencies
###Code
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
import os
import sys
import IPython
assert sys.version_info.major == 3, "This notebook requires Python 3."
# Install Python package dependencies.
print("Installing TensorFlow 2.4.1 and TensorFlow Data Validation (TFDV)")
! pip3 install {USER_FLAG} --quiet --upgrade tensorflow==2.4.1 tensorflow_data_validation[visualization]
! pip3 install {USER_FLAG} --quiet --upgrade google-api-python-client google-auth-oauthlib google-auth-httplib2 oauth2client requests
! pip3 install {USER_FLAG} --quiet --upgrade google-cloud-aiplatform
! pip3 install {USER_FLAG} --quiet --upgrade google-cloud-storage==1.32.0
# Automatically restart kernel after installing new packages.
if not os.getenv("IS_TESTING"):
print("Restarting kernel...")
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
print("Done.")
import os
import random
import sys
import time
# Import required packages.
import numpy as np
###Output
_____no_output_____
###Markdown
Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. You'll use the *gcloud* command throughout this notebook. In the following cell, enter your project name and run the cell to authenticate yourself with Google Cloud and initialize your *gcloud* configuration settings.**For this lab, we're going to use region us-central1 for all our resources (BigQuery training data, Cloud Storage bucket, model and endpoint locations, etc.). Those resources can be deployed in other regions, as long as they're consistently co-located, but we're going to use one fixed region to keep things as simple and error-free as possible.**
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
REGION = "us-central1"
SUFFIX = "aiplatform.googleapis.com"
API_ENDPOINT = f"{REGION}-{SUFFIX}"
PREDICT_API_ENDPOINT = f"{REGION}-prediction-{SUFFIX}"
if os.getenv("IS_TESTING"):
!gcloud --quiet components install beta
!gcloud --quiet components update
!gcloud config set project $PROJECT_ID
!gcloud config set ai/region $REGION
###Output
_____no_output_____
###Markdown
Login to your Google Cloud account and enable AI services
###Code
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
!gcloud services enable aiplatform.googleapis.com
###Output
_____no_output_____
###Markdown
Define some helper functionsRun the following cell to define some utility functions used throughout this notebook. Although these functions are not critical to understanding the main concepts, feel free to expand the cell if you're curious or want to dive deeper into how some of your API requests are made.
###Code
# @title Utility functions
import copy
import os
from google.cloud.aiplatform_v1beta1.services.endpoint_service import \
EndpointServiceClient
from google.cloud.aiplatform_v1beta1.services.job_service import \
JobServiceClient
from google.cloud.aiplatform_v1beta1.services.prediction_service import \
PredictionServiceClient
from google.cloud.aiplatform_v1beta1.types.io import BigQuerySource
from google.cloud.aiplatform_v1beta1.types.model_deployment_monitoring_job import (
ModelDeploymentMonitoringJob, ModelDeploymentMonitoringObjectiveConfig,
ModelDeploymentMonitoringScheduleConfig)
from google.cloud.aiplatform_v1beta1.types.model_monitoring import (
ModelMonitoringAlertConfig, ModelMonitoringObjectiveConfig,
SamplingStrategy, ThresholdConfig)
from google.cloud.aiplatform_v1beta1.types.prediction_service import \
PredictRequest
from google.protobuf import json_format
from google.protobuf.duration_pb2 import Duration
from google.protobuf.struct_pb2 import Value
DEFAULT_THRESHOLD_VALUE = 0.001
def create_monitoring_job(objective_configs):
# Create sampling configuration.
random_sampling = SamplingStrategy.RandomSampleConfig(sample_rate=LOG_SAMPLE_RATE)
sampling_config = SamplingStrategy(random_sample_config=random_sampling)
# Create schedule configuration.
duration = Duration(seconds=MONITOR_INTERVAL)
schedule_config = ModelDeploymentMonitoringScheduleConfig(monitor_interval=duration)
# Create alerting configuration.
emails = [USER_EMAIL]
email_config = ModelMonitoringAlertConfig.EmailAlertConfig(user_emails=emails)
alerting_config = ModelMonitoringAlertConfig(email_alert_config=email_config)
# Create the monitoring job.
endpoint = f"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}"
predict_schema = ""
analysis_schema = ""
job = ModelDeploymentMonitoringJob(
display_name=JOB_NAME,
endpoint=endpoint,
model_deployment_monitoring_objective_configs=objective_configs,
logging_sampling_strategy=sampling_config,
model_deployment_monitoring_schedule_config=schedule_config,
model_monitoring_alert_config=alerting_config,
predict_instance_schema_uri=predict_schema,
analysis_instance_schema_uri=analysis_schema,
)
options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.create_model_deployment_monitoring_job(
parent=parent, model_deployment_monitoring_job=job
)
print("Created monitoring job:")
print(response)
return response
def get_thresholds(default_thresholds, custom_thresholds):
thresholds = {}
default_threshold = ThresholdConfig(value=DEFAULT_THRESHOLD_VALUE)
for feature in default_thresholds.split(","):
feature = feature.strip()
thresholds[feature] = default_threshold
for custom_threshold in custom_thresholds.split(","):
pair = custom_threshold.split(":")
if len(pair) != 2:
print(f"Invalid custom skew threshold: {custom_threshold}")
return
feature, value = pair
thresholds[feature] = ThresholdConfig(value=float(value))
return thresholds
def get_deployed_model_ids(endpoint_id):
client_options = dict(api_endpoint=API_ENDPOINT)
client = EndpointServiceClient(client_options=client_options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.get_endpoint(name=f"{parent}/endpoints/{endpoint_id}")
model_ids = []
for model in response.deployed_models:
model_ids.append(model.id)
return model_ids
def set_objectives(model_ids, objective_template):
# Use the same objective config for all models.
objective_configs = []
for model_id in model_ids:
objective_config = copy.deepcopy(objective_template)
objective_config.deployed_model_id = model_id
objective_configs.append(objective_config)
return objective_configs
def send_predict_request(endpoint, input):
client_options = {"api_endpoint": PREDICT_API_ENDPOINT}
client = PredictionServiceClient(client_options=client_options)
params = {}
params = json_format.ParseDict(params, Value())
request = PredictRequest(endpoint=endpoint, parameters=params)
inputs = [json_format.ParseDict(input, Value())]
request.instances.extend(inputs)
response = client.predict(request)
return response
def list_monitoring_jobs():
client_options = dict(api_endpoint=API_ENDPOINT)
parent = f"projects/{PROJECT_ID}/locations/us-central1"
client = JobServiceClient(client_options=client_options)
response = client.list_model_deployment_monitoring_jobs(parent=parent)
print(response)
def pause_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.pause_model_deployment_monitoring_job(name=job)
print(response)
def delete_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.delete_model_deployment_monitoring_job(name=job)
print(response)
# Sampling distributions for categorical features...
DAYOFWEEK = {1: 1040, 2: 1223, 3: 1352, 4: 1217, 5: 1078, 6: 1011, 7: 1110}
LANGUAGE = {
"en-us": 4807,
"en-gb": 678,
"ja-jp": 419,
"en-au": 310,
"en-ca": 299,
"de-de": 147,
"en-in": 130,
"en": 127,
"fr-fr": 94,
"pt-br": 81,
"es-us": 65,
"zh-tw": 64,
"zh-hans-cn": 55,
"es-mx": 53,
"nl-nl": 37,
"fr-ca": 34,
"en-za": 29,
"vi-vn": 29,
"en-nz": 29,
"es-es": 25,
}
OS = {"IOS": 3980, "ANDROID": 3798, "null": 253}
MONTH = {6: 3125, 7: 1838, 8: 1276, 9: 1718, 10: 74}
COUNTRY = {
"United States": 4395,
"India": 486,
"Japan": 450,
"Canada": 354,
"Australia": 327,
"United Kingdom": 303,
"Germany": 144,
"Mexico": 102,
"France": 97,
"Brazil": 93,
"Taiwan": 72,
"China": 65,
"Saudi Arabia": 49,
"Pakistan": 48,
"Egypt": 46,
"Netherlands": 45,
"Vietnam": 42,
"Philippines": 39,
"South Africa": 38,
}
# Means and standard deviations for numerical features...
MEAN_SD = {
"julianday": (204.6, 34.7),
"cnt_user_engagement": (30.8, 53.2),
"cnt_level_start_quickplay": (7.8, 28.9),
"cnt_level_end_quickplay": (5.0, 16.4),
"cnt_level_complete_quickplay": (2.1, 9.9),
"cnt_level_reset_quickplay": (2.0, 19.6),
"cnt_post_score": (4.9, 13.8),
"cnt_spend_virtual_currency": (0.4, 1.8),
"cnt_ad_reward": (0.1, 0.6),
"cnt_challenge_a_friend": (0.0, 0.3),
"cnt_completed_5_levels": (0.1, 0.4),
"cnt_use_extra_steps": (0.4, 1.7),
}
DEFAULT_INPUT = {
"cnt_ad_reward": 0,
"cnt_challenge_a_friend": 0,
"cnt_completed_5_levels": 1,
"cnt_level_complete_quickplay": 3,
"cnt_level_end_quickplay": 5,
"cnt_level_reset_quickplay": 2,
"cnt_level_start_quickplay": 6,
"cnt_post_score": 34,
"cnt_spend_virtual_currency": 0,
"cnt_use_extra_steps": 0,
"cnt_user_engagement": 120,
"country": "Denmark",
"dayofweek": 3,
"julianday": 254,
"language": "da-dk",
"month": 9,
"operating_system": "IOS",
"user_pseudo_id": "104B0770BAE16E8B53DF330C95881893",
}
###Output
_____no_output_____
###Markdown
Import your modelThe churn propensity model you'll be using in this notebook has been trained in BigQuery ML and exported to a Google Cloud Storage bucket. This illustrates how you can easily export a trained model and move a model from one cloud service to another. Run the next cell to import this model into your project. **If you've already imported your model, you can skip this step.**
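If you are curious what the exported model artifact looks like before importing it, you can list the bucket contents first. This is optional; it assumes you have read access to the `gs://mco-mm/churn` artifact referenced in the next cell, and a TensorFlow SavedModel layout is expected but not guaranteed:

```python
# Optional: inspect the exported model artifact referenced in the next cell.
# Assumes the bucket is readable from your environment.
!gsutil ls -r gs://mco-mm/churn
```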
###Code
MODEL_NAME = "churn"
IMAGE = "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-4:latest"
ARTIFACT = "gs://mco-mm/churn"
output = !gcloud --quiet beta ai models upload --container-image-uri=$IMAGE --artifact-uri=$ARTIFACT --display-name=$MODEL_NAME --format="value(model)"
print("model output: ", output)
MODEL_ID = output[1].split("/")[-1]
print(f"Model {MODEL_NAME}/{MODEL_ID} created.")
###Output
_____no_output_____
###Markdown
Deploy your endpointNow that you've imported your model into your project, you need to create an endpoint to serve your model. An endpoint can be thought of as a channel through which your model provides prediction services. Once established, you'll be able to make prediction requests on your model via the public internet. Your endpoint is also serverless, in the sense that Google ensures high availability by reducing single points of failure, and scalability by dynamically allocating resources to meet the demand for your service. In this way, you are able to focus on your model quality, and are freed from administrative and infrastructure concerns.Run the next cell to deploy your model to an endpoint. **This will take about ten minutes to complete. If you've already deployed a model to an endpoint, you can reuse your endpoint by running the cell after the next one.**
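Once the deployment below has completed, you can optionally double-check that the model is attached to the endpoint by reusing the `get_deployed_model_ids` helper defined earlier. A quick sanity-check sketch, assuming `ENDPOINT_ID` has been populated by the deployment cell:

```python
# Optional sanity check after deployment: list the model IDs attached to the endpoint.
# Assumes ENDPOINT_ID has been set by the deployment cell below.
print(get_deployed_model_ids(ENDPOINT_ID))
```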
###Code
ENDPOINT_NAME = "churn"
output = !gcloud --quiet beta ai endpoints create --display-name=$ENDPOINT_NAME --format="value(name)"
print("endpoint output: ", output)
ENDPOINT = output[-1]
ENDPOINT_ID = ENDPOINT.split("/")[-1]
output = !gcloud --quiet beta ai endpoints deploy-model $ENDPOINT_ID --display-name=$ENDPOINT_NAME --model=$MODEL_ID --traffic-split="0=100"
DEPLOYED_MODEL_ID = output[1].split()[-1][:-1]
print(
f"Model {MODEL_NAME}/{MODEL_ID}/{DEPLOYED_MODEL_ID} deployed to Endpoint {ENDPOINT_NAME}/{ENDPOINT_ID}/{ENDPOINT}."
)
# @title Run this cell only if you want to reuse an existing endpoint.
if not os.getenv("IS_TESTING"):
ENDPOINT_ID = "" # @param {type:"string"}
ENDPOINT = f"projects/mco-mm/locations/us-central1/endpoints/{ENDPOINT_ID}"
###Output
_____no_output_____
###Markdown
Run a prediction testNow that you have imported a model and deployed that model to an endpoint, you are ready to verify that it's working. Run the next cell to send a test prediction request. If everything works as expected, you should receive a response encoded in a text representation called JSON.**Try this now by running the next cell and examine the results.**
###Code
import pprint as pp
print(ENDPOINT)
print("request:")
pp.pprint(DEFAULT_INPUT)
try:
resp = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("response")
pp.pprint(resp)
except Exception:
print("prediction request failed")
###Output
_____no_output_____
###Markdown
Taking a closer look at the results, we see the following elements:- **churned_values** - a set of possible values (0 and 1) for the target field- **churned_probs** - a corresponding set of probabilities for each possible target field value (5x10^-40 and 1.0, respectively)- **predicted_churn** - based on the probabilities, the predicted value of the target field (1)This response encodes the model's prediction in a format that is readily digestible by software, which makes this service ideal for automated use by an application. Start your monitoring jobNow that you've created an endpoint to serve prediction requests on your model, you're ready to start a monitoring job to keep an eye on model quality and to alert you if and when input begins to deviate in a way that may impact your model's prediction quality.In this section, you will configure and create a model monitoring job based on the churn propensity model you imported from BigQuery ML. Configure the following fields:1. Log sample rate - Your prediction requests and responses are logged to BigQuery tables, which are automatically created when you create a monitoring job. This parameter specifies the desired logging frequency for those tables.1. Monitor interval - the time window over which to analyze your data and report anomalies. The minimum window is one hour (3600 seconds).1. Target field - the prediction target column name in the training dataset.1. Skew detection threshold - the skew threshold for each feature you want to monitor.1. Prediction drift threshold - the drift threshold for each feature you want to monitor.
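Before filling in the next cell, it may help to see how the comma-separated threshold strings are interpreted. The `get_thresholds` helper defined earlier gives every feature named in the default string the global `DEFAULT_THRESHOLD_VALUE`, while `feature:value` pairs in the custom string override it. A small illustrative sketch:

```python
# Illustration only: expand the threshold strings used in the next cell into
# per-feature ThresholdConfig objects via the helper defined earlier.
example_thresholds = get_thresholds("country,language", "cnt_user_engagement:.5")
for feature, config in example_thresholds.items():
    print(feature, config.value)
```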
###Code
USER_EMAIL = "" # @param {type:"string"}
JOB_NAME = "churn"
# Sampling rate (optional, default=.8)
LOG_SAMPLE_RATE = 0.8 # @param {type:"number"}
# Monitoring Interval in seconds (optional, default=3600).
MONITOR_INTERVAL = 3600 # @param {type:"number"}
# URI to training dataset.
DATASET_BQ_URI = "bq://mco-mm.bqmlga4.train" # @param {type:"string"}
# Prediction target column name in training dataset.
TARGET = "churned"
# Skew and drift thresholds.
SKEW_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
SKEW_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" # @param {type:"string"}
DRIFT_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
DRIFT_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" # @param {type:"string"}
###Output
_____no_output_____
###Markdown
Create your monitoring jobThe following code uses the Google Python client library to translate your configuration settings into a programmatic request to start a model monitoring job. Instantiating a monitoring job can take some time. If everything looks good with your request, you'll get a successful API response. Then, you'll need to check your email to receive a notification that the job is running.
###Code
skew_thresholds = get_thresholds(SKEW_DEFAULT_THRESHOLDS, SKEW_CUSTOM_THRESHOLDS)
drift_thresholds = get_thresholds(DRIFT_DEFAULT_THRESHOLDS, DRIFT_CUSTOM_THRESHOLDS)
skew_config = ModelMonitoringObjectiveConfig.TrainingPredictionSkewDetectionConfig(
skew_thresholds=skew_thresholds
)
drift_config = ModelMonitoringObjectiveConfig.PredictionDriftDetectionConfig(
drift_thresholds=drift_thresholds
)
training_dataset = ModelMonitoringObjectiveConfig.TrainingDataset(target_field=TARGET)
training_dataset.bigquery_source = BigQuerySource(input_uri=DATASET_BQ_URI)
objective_config = ModelMonitoringObjectiveConfig(
training_dataset=training_dataset,
training_prediction_skew_detection_config=skew_config,
prediction_drift_detection_config=drift_config,
)
model_ids = get_deployed_model_ids(ENDPOINT_ID)
objective_template = ModelDeploymentMonitoringObjectiveConfig(
objective_config=objective_config
)
objective_configs = set_objectives(model_ids, objective_template)
monitoring_job = create_monitoring_job(objective_configs)
# Run a prediction request to generate schema, if necessary.
try:
_ = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("prediction succeeded")
except Exception:
print("prediction failed")
###Output
_____no_output_____
###Markdown
After a minute or two, you should receive email at the address you configured above for USER_EMAIL. This email confirms successful deployment of your monitoring job. Here's a sample of what this email might look like:As your monitoring job collects data, measurements are stored in Google Cloud Storage and you are free to examine your data at any time. The circled path in the image above specifies the location of your measurements in Google Cloud Storage. Run the following cell to take a look at your measurements in Cloud Storage.
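In addition to browsing Cloud Storage, you can confirm from Python that the job itself exists by calling the `list_monitoring_jobs` helper defined earlier, which prints every monitoring job in the configured project and region:

```python
# Optional: list all model monitoring jobs in the configured project/region,
# using the helper defined earlier in this notebook.
list_monitoring_jobs()
```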
###Code
!gsutil ls gs://cloud-ai-platform-fdfb4810-148b-4c86-903c-dbdff879f6e1/*/*
###Output
_____no_output_____
###Markdown
You will notice the following components in these Cloud Storage paths:- **cloud-ai-platform-..** - This is a bucket created for you and assigned to capture your service's prediction data. Each monitoring job you create will trigger creation of a new folder in this bucket.- **[model_monitoring|instance_schemas]/job-..** - This is your unique monitoring job number, which you can see above in both the response to your job creation request and the email notification. - **instance_schemas/job-../analysis** - This is the monitoring job's understanding and encoding of your training data's schema (field names, types, etc.).- **instance_schemas/job-../predict** - This is the first prediction made to your model after the current monitoring job was enabled.- **model_monitoring/job-../serving** - This folder is used to record data relevant to drift calculations. It contains measurement summaries for every hour your model serves traffic.- **model_monitoring/job-../training** - This folder is used to record data relevant to training-serving skew calculations. It contains an ongoing summary of prediction data relative to training data. You can create monitoring jobs with other user interfacesIn the previous cells, you created a monitoring job using the Python client library. You can also use the *gcloud* command line tool to create a model monitoring job and, in the near future, you will be able to use the Cloud Console for this as well. Generate test data to trigger alertingNow you are ready to test the monitoring function. Run the following cell, which will generate fabricated test predictions designed to trigger the thresholds you specified above. It takes about five minutes to run this cell and at least an hour to assess and report anomalies in skew or drift, so after running this cell, feel free to proceed with the notebook; you'll see how to examine the resulting alert later.
###Code
def random_uid():
digits = [str(i) for i in range(10)] + ["A", "B", "C", "D", "E", "F"]
return "".join(random.choices(digits, k=32))
def monitoring_test(count, sleep, perturb_num={}, perturb_cat={}):
# Use random sampling and mean/sd with gaussian distribution to model
# training data. Then modify sampling distros for two categorical features
# and mean/sd for two numerical features.
mean_sd = MEAN_SD.copy()
country = COUNTRY.copy()
for k, (mean_fn, sd_fn) in perturb_num.items():
orig_mean, orig_sd = MEAN_SD[k]
mean_sd[k] = (mean_fn(orig_mean), sd_fn(orig_sd))
for k, v in perturb_cat.items():
country[k] = v
for i in range(0, count):
input = DEFAULT_INPUT.copy()
input["user_pseudo_id"] = str(random_uid())
input["country"] = random.choices([*country], list(country.values()))[0]
input["dayofweek"] = random.choices([*DAYOFWEEK], list(DAYOFWEEK.values()))[0]
input["language"] = str(random.choices([*LANGUAGE], list(LANGUAGE.values()))[0])
input["operating_system"] = str(random.choices([*OS], list(OS.values()))[0])
input["month"] = random.choices([*MONTH], list(MONTH.values()))[0]
for key, (mean, sd) in mean_sd.items():
sample_val = round(float(np.random.normal(mean, sd, 1)))
val = max(sample_val, 0)
input[key] = val
print(f"Sending prediction {i}")
try:
send_predict_request(ENDPOINT, input)
except Exception:
print("prediction request failed")
time.sleep(sleep)
print("Test Completed.")
test_time = 300
tests_per_sec = 1
sleep_time = 1 / tests_per_sec
iterations = test_time * tests_per_sec
perturb_num = {"cnt_user_engagement": (lambda x: x * 3, lambda x: x / 3)}
perturb_cat = {"Japan": max(COUNTRY.values()) * 2}
monitoring_test(iterations, sleep_time, perturb_num, perturb_cat)
###Output
_____no_output_____
###Markdown
Interpret your resultsWhile waiting for your results, which, as noted, may take up to an hour, you can read ahead to get a sense of the alerting experience. Here's what a sample email alert looks like... This email is warning you that the *cnt_user_engagement*, *country* and *language* feature values seen in production have skewed above your threshold between training and serving your model. It's also telling you that the *cnt_user_engagement* feature value is drifting significantly over time, again, as per your threshold specification. Monitoring results in the Cloud ConsoleYou can examine your model monitoring data from the Cloud Console. Below is a screenshot of those capabilities. Monitoring Status Monitoring Alerts Clean upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:
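The next cell removes the endpoint and the model. If you also want to remove the monitoring job created earlier in this notebook, you can do so with the `delete_monitoring_job` helper defined above. A sketch, assuming the `monitoring_job` variable still holds the response returned by `create_monitoring_job`:

```python
# Optionally delete the monitoring job created earlier in this notebook.
# `monitoring_job.name` is the fully qualified resource name returned by the API.
delete_monitoring_job(monitoring_job.name)
```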
###Code
# Delete endpoint resource
!gcloud ai endpoints delete $ENDPOINT_NAME --quiet
# Delete model resource
!gcloud ai models delete $MODEL_NAME --quiet
###Output
_____no_output_____
###Markdown
Vertex AI Model Monitoring <a href="https://console.cloud.google.com/ai-platform/notebooks/deploy-notebook?name=Model%20Monitoring&download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fai-platform-samples%2Fmaster%2Fai-platform-unified%2Fnotebooks%2Fofficial%2Fmodel_monitoring%2Fmodel_monitoring.ipynb" Open in GCP Notebooks Open in Colab View on GitHub Overview What is Model Monitoring?Modern applications rely on a well-established set of capabilities to monitor the health of their services. Examples include:* software versioning* rigorous deployment processes* event logging* alerting/notification of situations requiring intervention* on-demand and automated diagnostic tracing* automated performance and functional testingYou should be able to manage your ML services with the same degree of power and flexibility with which you can manage your applications. That's what MLOps is all about - managing ML services with the best practices Google and the broader computing industry have learned from generations of experience deploying well-engineered, reliable, and scalable services.Model monitoring is only one piece of the MLOps puzzle - it helps answer the following questions:* How well do recent service requests match the training data used to build your model? This is called **training-serving skew**.* How significantly are service requests evolving over time? This is called **drift detection**.If production traffic differs from training data, or varies substantially over time, that's likely to impact the quality of the answers your model produces. When that happens, you'd like to be alerted automatically and responsively, so that **you can anticipate problems before they affect your customer experiences or your revenue streams**. ObjectiveIn this notebook, you will learn how to... * deploy a pre-trained model* configure model monitoring* generate some artificial traffic* understand how to interpret the statistics, visualizations, and other data reported by the model monitoring feature Costs This tutorial uses billable components of Google Cloud:* Vertex AI* BigQueryLearn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. The example modelThe model you'll use in this notebook is based on [this blog post](https://cloud.google.com/blog/topics/developers-practitioners/churn-prediction-game-developers-using-google-analytics-4-ga4-and-bigquery-ml). The idea behind this model is that your company has extensive log data describing how your game users have interacted with the site. The raw data contains the following categories of information:- identity - unique player identity numbers- demographic features - information about the player, such as the geographic region in which a player is located- behavioral features - counts of the number of times a player has triggered certain game events, such as reaching a new level- churn propensity - this is the label or target feature; it provides an estimated probability that this player will churn, i.e. stop being an active player.The blog article referenced above explains how to use BigQuery to store the raw data, pre-process it for use in machine learning, and train a model.
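To make the skew and drift ideas above a little more concrete, here is a small, self-contained sketch showing one common way to compare a training-time distribution with a serving-time distribution for a categorical feature such as `country`. It is purely conceptual: the serving counts and the threshold are invented for illustration, and the monitoring service computes its own statistics server-side.

```python
# Toy illustration of training-serving skew for one categorical feature.

def normalize(counts):
    """Turn raw category counts into a probability distribution."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def linf_distance(p, q):
    """Largest absolute per-category probability difference (L-infinity)."""
    categories = set(p) | set(q)
    return max(abs(p.get(c, 0.0) - q.get(c, 0.0)) for c in categories)

training = normalize({"United States": 4395, "India": 486, "Japan": 450})
serving = normalize({"United States": 2000, "India": 400, "Japan": 1800})

distance = linf_distance(training, serving)
threshold = 0.3  # hypothetical per-feature alerting threshold
print(f"distance = {distance:.3f}, alert = {distance > threshold}")
```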
Because this notebook focuses on model monitoring, rather than training models, you're going to reuse a pre-trained version of this model, which has been exported to Google Cloud Storage. In the next section, you will set up your environment and import this model into your own project. Before you begin Set up your dependencies
###Code
import os
import sys
import IPython
assert sys.version_info.major == 3, "This notebook requires Python 3."
# Install Python package dependencies.
print("Installing TensorFlow 2.4.1 and TensorFlow Data Validation (TFDV)")
!pip3 install -q numpy
!pip3 install -q tensorflow==2.4.1 tensorflow_data_validation[visualization]
!pip3 install -q --upgrade google-api-python-client google-auth-oauthlib google-auth-httplib2 oauth2client requests
!pip3 install -q google-cloud-aiplatform
!pip3 install -q --upgrade google-cloud-storage==1.32.0
# Automatically restart kernel after installing new packages.
if not os.getenv("IS_TESTING"):
print("Restarting kernel...")
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
print("Done.")
import os
import random
import sys
import time
# Import required packages.
import numpy as np
###Output
_____no_output_____
###Markdown
Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. You'll use the *gcloud* command throughout this notebook. In the following cell, enter your project name and run the cell to authenticate yourself with the Google Cloud and initialize your *gcloud* configuration settings.**For this lab, we're going to use region us-central1 for all our resources (BigQuery training data, Cloud Storage bucket, model and endpoint locations, etc.). Those resources can be deployed in other regions, as long as they're consistently co-located, but we're going to use one fixed region to keep things as simple and error free as possible.**
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
REGION = "us-central1"
SUFFIX = "aiplatform.googleapis.com"
API_ENDPOINT = f"{REGION}-{SUFFIX}"
PREDICT_API_ENDPOINT = f"{REGION}-prediction-{SUFFIX}"
if os.getenv("IS_TESTING"):
!gcloud --quiet components install beta
!gcloud --quiet components update
!gcloud config set project $PROJECT_ID
!gcloud config set ai/region $REGION
###Output
_____no_output_____
###Markdown
Log in to your Google Cloud account and enable AI services
###Code
# If on AI Platform, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
!gcloud services enable aiplatform.googleapis.com
###Output
_____no_output_____
###Markdown
Define some helper functionsRun the following cell to define some utility functions used throughout this notebook. Although these functions are not critical to understand the main concepts, feel free to expand the cell if you're curious or want to dive deeper into how some of your API requests are made.
###Code
# @title Utility functions
import copy
import os
from google.cloud.aiplatform_v1beta1.services.endpoint_service import \
EndpointServiceClient
from google.cloud.aiplatform_v1beta1.services.job_service import \
JobServiceClient
from google.cloud.aiplatform_v1beta1.services.prediction_service import \
PredictionServiceClient
from google.cloud.aiplatform_v1beta1.types.io import BigQuerySource
from google.cloud.aiplatform_v1beta1.types.model_deployment_monitoring_job import (
ModelDeploymentMonitoringJob, ModelDeploymentMonitoringObjectiveConfig,
ModelDeploymentMonitoringScheduleConfig)
from google.cloud.aiplatform_v1beta1.types.model_monitoring import (
ModelMonitoringAlertConfig, ModelMonitoringObjectiveConfig,
SamplingStrategy, ThresholdConfig)
from google.cloud.aiplatform_v1beta1.types.prediction_service import \
PredictRequest
from google.protobuf import json_format
from google.protobuf.duration_pb2 import Duration
from google.protobuf.struct_pb2 import Value
DEFAULT_THRESHOLD_VALUE = 0.001
def create_monitoring_job(objective_configs):
# Create sampling configuration.
random_sampling = SamplingStrategy.RandomSampleConfig(sample_rate=LOG_SAMPLE_RATE)
sampling_config = SamplingStrategy(random_sample_config=random_sampling)
# Create schedule configuration.
duration = Duration(seconds=MONITOR_INTERVAL)
schedule_config = ModelDeploymentMonitoringScheduleConfig(monitor_interval=duration)
# Create alerting configuration.
emails = [USER_EMAIL]
email_config = ModelMonitoringAlertConfig.EmailAlertConfig(user_emails=emails)
alerting_config = ModelMonitoringAlertConfig(email_alert_config=email_config)
# Create the monitoring job.
endpoint = f"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}"
predict_schema = ""
analysis_schema = ""
job = ModelDeploymentMonitoringJob(
display_name=JOB_NAME,
endpoint=endpoint,
model_deployment_monitoring_objective_configs=objective_configs,
logging_sampling_strategy=sampling_config,
model_deployment_monitoring_schedule_config=schedule_config,
model_monitoring_alert_config=alerting_config,
predict_instance_schema_uri=predict_schema,
analysis_instance_schema_uri=analysis_schema,
)
options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.create_model_deployment_monitoring_job(
parent=parent, model_deployment_monitoring_job=job
)
print("Created monitoring job:")
print(response)
return response
def get_thresholds(default_thresholds, custom_thresholds):
thresholds = {}
default_threshold = ThresholdConfig(value=DEFAULT_THRESHOLD_VALUE)
for feature in default_thresholds.split(","):
feature = feature.strip()
thresholds[feature] = default_threshold
for custom_threshold in custom_thresholds.split(","):
pair = custom_threshold.split(":")
if len(pair) != 2:
print(f"Invalid custom skew threshold: {custom_threshold}")
return
feature, value = pair
thresholds[feature] = ThresholdConfig(value=float(value))
return thresholds
def get_deployed_model_ids(endpoint_id):
client_options = dict(api_endpoint=API_ENDPOINT)
client = EndpointServiceClient(client_options=client_options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.get_endpoint(name=f"{parent}/endpoints/{endpoint_id}")
model_ids = []
for model in response.deployed_models:
model_ids.append(model.id)
return model_ids
def set_objectives(model_ids, objective_template):
# Use the same objective config for all models.
objective_configs = []
for model_id in model_ids:
objective_config = copy.deepcopy(objective_template)
objective_config.deployed_model_id = model_id
objective_configs.append(objective_config)
return objective_configs
def send_predict_request(endpoint, input):
client_options = {"api_endpoint": PREDICT_API_ENDPOINT}
client = PredictionServiceClient(client_options=client_options)
params = {}
params = json_format.ParseDict(params, Value())
request = PredictRequest(endpoint=endpoint, parameters=params)
inputs = [json_format.ParseDict(input, Value())]
request.instances.extend(inputs)
response = client.predict(request)
return response
def list_monitoring_jobs():
client_options = dict(api_endpoint=API_ENDPOINT)
parent = f"projects/{PROJECT_ID}/locations/us-central1"
client = JobServiceClient(client_options=client_options)
response = client.list_model_deployment_monitoring_jobs(parent=parent)
print(response)
def pause_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.pause_model_deployment_monitoring_job(name=job)
print(response)
def delete_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.delete_model_deployment_monitoring_job(name=job)
print(response)
# Sampling distributions for categorical features...
DAYOFWEEK = {1: 1040, 2: 1223, 3: 1352, 4: 1217, 5: 1078, 6: 1011, 7: 1110}
LANGUAGE = {
"en-us": 4807,
"en-gb": 678,
"ja-jp": 419,
"en-au": 310,
"en-ca": 299,
"de-de": 147,
"en-in": 130,
"en": 127,
"fr-fr": 94,
"pt-br": 81,
"es-us": 65,
"zh-tw": 64,
"zh-hans-cn": 55,
"es-mx": 53,
"nl-nl": 37,
"fr-ca": 34,
"en-za": 29,
"vi-vn": 29,
"en-nz": 29,
"es-es": 25,
}
OS = {"IOS": 3980, "ANDROID": 3798, "null": 253}
MONTH = {6: 3125, 7: 1838, 8: 1276, 9: 1718, 10: 74}
COUNTRY = {
"United States": 4395,
"India": 486,
"Japan": 450,
"Canada": 354,
"Australia": 327,
"United Kingdom": 303,
"Germany": 144,
"Mexico": 102,
"France": 97,
"Brazil": 93,
"Taiwan": 72,
"China": 65,
"Saudi Arabia": 49,
"Pakistan": 48,
"Egypt": 46,
"Netherlands": 45,
"Vietnam": 42,
"Philippines": 39,
"South Africa": 38,
}
# Means and standard deviations for numerical features...
MEAN_SD = {
"julianday": (204.6, 34.7),
"cnt_user_engagement": (30.8, 53.2),
"cnt_level_start_quickplay": (7.8, 28.9),
"cnt_level_end_quickplay": (5.0, 16.4),
"cnt_level_complete_quickplay": (2.1, 9.9),
"cnt_level_reset_quickplay": (2.0, 19.6),
"cnt_post_score": (4.9, 13.8),
"cnt_spend_virtual_currency": (0.4, 1.8),
"cnt_ad_reward": (0.1, 0.6),
"cnt_challenge_a_friend": (0.0, 0.3),
"cnt_completed_5_levels": (0.1, 0.4),
"cnt_use_extra_steps": (0.4, 1.7),
}
DEFAULT_INPUT = {
"cnt_ad_reward": 0,
"cnt_challenge_a_friend": 0,
"cnt_completed_5_levels": 1,
"cnt_level_complete_quickplay": 3,
"cnt_level_end_quickplay": 5,
"cnt_level_reset_quickplay": 2,
"cnt_level_start_quickplay": 6,
"cnt_post_score": 34,
"cnt_spend_virtual_currency": 0,
"cnt_use_extra_steps": 0,
"cnt_user_engagement": 120,
"country": "Denmark",
"dayofweek": 3,
"julianday": 254,
"language": "da-dk",
"month": 9,
"operating_system": "IOS",
"user_pseudo_id": "104B0770BAE16E8B53DF330C95881893",
}
###Output
_____no_output_____
###Markdown
Import your modelThe churn propensity model you'll be using in this notebook has been trained in BigQuery ML and exported to a Google Cloud Storage bucket. This illustrates how you can easily export a trained model and move a model from one cloud service to another. Run the next cell to import this model into your project. **If you've already imported your model, you can skip this step.**
###Code
MODEL_NAME = "churn"
IMAGE = "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-4:latest"
ARTIFACT = "gs://mco-mm/churn"
output = !gcloud --quiet beta ai models upload --container-image-uri=$IMAGE --artifact-uri=$ARTIFACT --display-name=$MODEL_NAME --format="value(model)"
print("model output: ", output)
MODEL_ID = output[1].split("/")[-1]
print(f"Model {MODEL_NAME}/{MODEL_ID} created.")
###Output
_____no_output_____
###Markdown
Deploy your endpointNow that you've imported your model into your project, you need to create an endpoint to serve your model. An endpoint can be thought of as a channel through which your model provides prediction services. Once established, you'll be able to make prediction requests on your model via the public internet. Your endpoint is also serverless, in the sense that Google ensures high availability by reducing single points of failure, and scalability by dynamically allocating resources to meet the demand for your service. In this way, you are able to focus on your model quality, and are freed from administrative and infrastructure concerns.Run the next cell to deploy your model to an endpoint. **This will take about ten minutes to complete. If you've already deployed a model to an endpoint, you can reuse your endpoint by running the cell after the next one.**
###Code
ENDPOINT_NAME = "churn"
output = !gcloud --quiet beta ai endpoints create --display-name=$ENDPOINT_NAME --format="value(name)"
print("endpoint output: ", output)
ENDPOINT = output[-1]
ENDPOINT_ID = ENDPOINT.split("/")[-1]
output = !gcloud --quiet beta ai endpoints deploy-model $ENDPOINT_ID --display-name=$ENDPOINT_NAME --model=$MODEL_ID --traffic-split="0=100"
DEPLOYED_MODEL_ID = output[1].split()[-1][:-1]
print(
f"Model {MODEL_NAME}/{MODEL_ID}/{DEPLOYED_MODEL_ID} deployed to Endpoint {ENDPOINT_NAME}/{ENDPOINT_ID}/{ENDPOINT}."
)
# @title Run this cell only if you want to reuse an existing endpoint.
if not os.getenv("IS_TESTING"):
ENDPOINT_ID = "" # @param {type:"string"}
ENDPOINT = f"projects/mco-mm/locations/us-central1/endpoints/{ENDPOINT_ID}"
###Output
_____no_output_____
###Markdown
Run a prediction testNow that you have imported a model and deployed that model to an endpoint, you are ready to verify that it's working. Run the next cell to send a test prediction request. If everything works as expected, you should receive a response encoded in a text representation called JSON.**Try this now by running the next cell and examine the results.**
###Code
import pprint as pp
print(ENDPOINT)
print("request:")
pp.pprint(DEFAULT_INPUT)
try:
resp = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("response")
pp.pprint(resp)
except Exception:
print("prediction request failed")
###Output
_____no_output_____
###Markdown
Taking a closer look at the results, we see the following elements:- **churned_values** - a set of possible values (0 and 1) for the target field- **churned_probs** - a corresponding set of probabilities for each possible target field value (5x10^-40 and 1.0, respectively)- **predicted_churn** - based on the probabilities, the predicted value of the target field (1)This response encodes the model's prediction in a format that is readily digestible by software, which makes this service ideal for automated use by an application. Start your monitoring jobNow that you've created an endpoint to serve prediction requests on your model, you're ready to start a monitoring job to keep an eye on model quality and to alert you if and when input begins to deviate in a way that may impact your model's prediction quality.In this section, you will configure and create a model monitoring job based on the churn propensity model you imported from BigQuery ML. Configure the following fields:1. Log sample rate - Your prediction requests and responses are logged to BigQuery tables, which are automatically created when you create a monitoring job. This parameter specifies the desired logging frequency for those tables.1. Monitor interval - the time window over which to analyze your data and report anomalies. The minimum window is one hour (3600 seconds).1. Target field - the prediction target column name in the training dataset.1. Skew detection threshold - the skew threshold for each feature you want to monitor.1. Prediction drift threshold - the drift threshold for each feature you want to monitor.
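Returning briefly to the prediction response format described at the top of this section: here is a minimal sketch of how an application might consume such a prediction once it has been converted to a plain Python dictionary. The payload below is hypothetical and only mirrors the field names listed above (the actual response object is a protocol buffer message, and the exact value types depend on the exported model):

```python
# Hypothetical, already-parsed prediction payload mirroring the fields above.
parsed = {
    "churned_values": ["0", "1"],       # shown as strings for illustration
    "churned_probs": [5e-40, 1.0],
    "predicted_churn": ["1"],
}

def churn_probability(prediction):
    """Return the probability the model assigned to the 'churned' class."""
    index = prediction["churned_values"].index("1")
    return prediction["churned_probs"][index]

print(f"P(churn) = {churn_probability(parsed):.3f}")
```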
###Code
USER_EMAIL = "" # @param {type:"string"}
JOB_NAME = "churn"
# Sampling rate (optional, default=.8)
LOG_SAMPLE_RATE = 0.8 # @param {type:"number"}
# Monitoring Interval in seconds (optional, default=3600).
MONITOR_INTERVAL = 3600 # @param {type:"number"}
# URI to training dataset.
DATASET_BQ_URI = "bq://mco-mm.bqmlga4.train" # @param {type:"string"}
# Prediction target column name in training dataset.
TARGET = "churned"
# Skew and drift thresholds.
SKEW_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
SKEW_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" # @param {type:"string"}
DRIFT_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
DRIFT_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" # @param {type:"string"}
###Output
_____no_output_____
###Markdown
Create your monitoring jobThe following code uses the Google Python client library to translate your configuration settings into a programmatic request to start a model monitoring job. Instantiating a monitoring job can take some time. If everything looks good with your request, you'll get a successful API response. Then, you'll need to check your email to receive a notification that the job is running.
###Code
skew_thresholds = get_thresholds(SKEW_DEFAULT_THRESHOLDS, SKEW_CUSTOM_THRESHOLDS)
drift_thresholds = get_thresholds(DRIFT_DEFAULT_THRESHOLDS, DRIFT_CUSTOM_THRESHOLDS)
skew_config = ModelMonitoringObjectiveConfig.TrainingPredictionSkewDetectionConfig(
skew_thresholds=skew_thresholds
)
drift_config = ModelMonitoringObjectiveConfig.PredictionDriftDetectionConfig(
drift_thresholds=drift_thresholds
)
training_dataset = ModelMonitoringObjectiveConfig.TrainingDataset(target_field=TARGET)
training_dataset.bigquery_source = BigQuerySource(input_uri=DATASET_BQ_URI)
objective_config = ModelMonitoringObjectiveConfig(
training_dataset=training_dataset,
training_prediction_skew_detection_config=skew_config,
prediction_drift_detection_config=drift_config,
)
model_ids = get_deployed_model_ids(ENDPOINT_ID)
objective_template = ModelDeploymentMonitoringObjectiveConfig(
objective_config=objective_config
)
objective_configs = set_objectives(model_ids, objective_template)
monitoring_job = create_monitoring_job(objective_configs)
# Run a prediction request to generate schema, if necessary.
try:
_ = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("prediction succeeded")
except Exception:
print("prediction failed")
###Output
_____no_output_____
###Markdown
After a minute or two, you should receive email at the address you configured above for USER_EMAIL. This email confirms successful deployment of your monitoring job. Here's a sample of what this email might look like:As your monitoring job collects data, measurements are stored in Google Cloud Storage and you are free to examine your data at any time. The circled path in the image above specifies the location of your measurements in Google Cloud Storage. Run the following cell to take a look at your measurements in Cloud Storage.
###Code
!gsutil ls gs://cloud-ai-platform-fdfb4810-148b-4c86-903c-dbdff879f6e1/*/*
###Output
_____no_output_____
###Markdown
You will notice the following components in these Cloud Storage paths:- **cloud-ai-platform-..** - This is a bucket created for you and assigned to capture your service's prediction data. Each monitoring job you create will trigger creation of a new folder in this bucket.- **[model_monitoring|instance_schemas]/job-..** - This is your unique monitoring job number, which you can see above in both the response to your job creation request and the email notification. - **instance_schemas/job-../analysis** - This is the monitoring job's understanding and encoding of your training data's schema (field names, types, etc.).- **instance_schemas/job-../predict** - This is the first prediction made to your model after the current monitoring job was enabled.- **model_monitoring/job-../serving** - This folder is used to record data relevant to drift calculations. It contains measurement summaries for every hour your model serves traffic.- **model_monitoring/job-../training** - This folder is used to record data relevant to training-serving skew calculations. It contains an ongoing summary of prediction data relative to training data. You can create monitoring jobs with other user interfacesIn the previous cells, you created a monitoring job using the Python client library. You can also use the *gcloud* command line tool to create a model monitoring job and, in the near future, you will be able to use the Cloud Console for this as well. Generate test data to trigger alertingNow you are ready to test the monitoring function. Run the following cell, which will generate fabricated test predictions designed to trigger the thresholds you specified above. It takes about five minutes to run this cell and at least an hour to assess and report anomalies in skew or drift, so after running this cell, feel free to proceed with the notebook; you'll see how to examine the resulting alert later.
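Before sending the test traffic, you can also check on the job programmatically. A sketch, assuming the v1beta1 `JobServiceClient` exposes the standard `get_model_deployment_monitoring_job` read method (the counterpart of the create/list/pause/delete calls used elsewhere in this notebook) and that `monitoring_job` still holds the object returned by `create_monitoring_job`:

```python
# Optional: fetch the monitoring job and print its display name and state.
# Assumes `monitoring_job` was returned by create_monitoring_job above.
client = JobServiceClient(client_options=dict(api_endpoint=API_ENDPOINT))
job = client.get_model_deployment_monitoring_job(name=monitoring_job.name)
print(job.display_name, job.state)
```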
###Code
def random_uid():
digits = [str(i) for i in range(10)] + ["A", "B", "C", "D", "E", "F"]
return "".join(random.choices(digits, k=32))
def monitoring_test(count, sleep, perturb_num={}, perturb_cat={}):
# Use random sampling and mean/sd with gaussian distribution to model
# training data. Then modify sampling distros for two categorical features
# and mean/sd for two numerical features.
mean_sd = MEAN_SD.copy()
country = COUNTRY.copy()
for k, (mean_fn, sd_fn) in perturb_num.items():
orig_mean, orig_sd = MEAN_SD[k]
mean_sd[k] = (mean_fn(orig_mean), sd_fn(orig_sd))
for k, v in perturb_cat.items():
country[k] = v
for i in range(0, count):
input = DEFAULT_INPUT.copy()
input["user_pseudo_id"] = str(random_uid())
input["country"] = random.choices([*country], list(country.values()))[0]
input["dayofweek"] = random.choices([*DAYOFWEEK], list(DAYOFWEEK.values()))[0]
input["language"] = str(random.choices([*LANGUAGE], list(LANGUAGE.values()))[0])
input["operating_system"] = str(random.choices([*OS], list(OS.values()))[0])
input["month"] = random.choices([*MONTH], list(MONTH.values()))[0]
for key, (mean, sd) in mean_sd.items():
sample_val = round(float(np.random.normal(mean, sd, 1)))
val = max(sample_val, 0)
input[key] = val
print(f"Sending prediction {i}")
try:
send_predict_request(ENDPOINT, input)
except Exception:
print("prediction request failed")
time.sleep(sleep)
print("Test Completed.")
test_time = 300
tests_per_sec = 1
sleep_time = 1 / tests_per_sec
iterations = test_time * tests_per_sec
perturb_num = {"cnt_user_engagement": (lambda x: x * 3, lambda x: x / 3)}
perturb_cat = {"Japan": max(COUNTRY.values()) * 2}
monitoring_test(iterations, sleep_time, perturb_num, perturb_cat)
###Output
_____no_output_____
###Markdown
Interpret your resultsWhile waiting for your results, which, as noted, may take up to an hour, you can read ahead to get a sense of the alerting experience. Here's what a sample email alert looks like... This email is warning you that the *cnt_user_engagement*, *country* and *language* feature values seen in production have skewed above your threshold between training and serving your model. It's also telling you that the *cnt_user_engagement* feature value is drifting significantly over time, again, as per your threshold specification. Monitoring results in the Cloud ConsoleYou can examine your model monitoring data from the Cloud Console. Below is a screenshot of those capabilities. Monitoring Status Monitoring Alerts Clean upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:
###Code
# Delete endpoint resource
!gcloud ai endpoints delete $ENDPOINT_NAME --quiet
# Delete model resource
!gcloud ai models delete $MODEL_NAME --quiet
###Output
_____no_output_____
###Markdown
Vertex Model Monitoring Open in GCP Notebooks Open in Colab View on GitHub Overview What is Model Monitoring?Modern applications rely on a well-established set of capabilities to monitor the health of their services. Examples include:* software versioning* rigorous deployment processes* event logging* alerting/notification of situations requiring intervention* on-demand and automated diagnostic tracing* automated performance and functional testingYou should be able to manage your ML services with the same degree of power and flexibility with which you can manage your applications. That's what MLOps is all about - managing ML services with the best practices Google and the broader computing industry have learned from generations of experience deploying well-engineered, reliable, and scalable services.Model monitoring is only one piece of the MLOps puzzle - it helps answer the following questions:* How well do recent service requests match the training data used to build your model? This is called **training-serving skew**.* How significantly are service requests evolving over time? This is called **drift detection**.If production traffic differs from training data, or varies substantially over time, that's likely to impact the quality of the answers your model produces. When that happens, you'd like to be alerted automatically and responsively, so that **you can anticipate problems before they affect your customer experiences or your revenue streams**. ObjectiveIn this notebook, you will learn how to... * deploy a pre-trained model* configure model monitoring* generate some artificial traffic* understand how to interpret the statistics, visualizations, and other data reported by the model monitoring feature Costs This tutorial uses billable components of Google Cloud:* Vertex AI* BigQueryLearn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. The example modelThe model you'll use in this notebook is based on [this blog post](https://cloud.google.com/blog/topics/developers-practitioners/churn-prediction-game-developers-using-google-analytics-4-ga4-and-bigquery-ml). The idea behind this model is that your company has extensive log data describing how your game users have interacted with the site. The raw data contains the following categories of information:- identity - unique player identity numbers- demographic features - information about the player, such as the geographic region in which a player is located- behavioral features - counts of the number of times a player has triggered certain game events, such as reaching a new level- churn propensity - this is the label or target feature; it provides an estimated probability that this player will churn, i.e. stop being an active player.The blog article referenced above explains how to use BigQuery to store the raw data, pre-process it for use in machine learning, and train a model. Because this notebook focuses on model monitoring, rather than training models, you're going to reuse a pre-trained version of this model, which has been exported to Google Cloud Storage. In the next section, you will set up your environment and import this model into your own project. Before you begin Set up your dependencies
###Code
import os
import sys
assert sys.version_info.major == 3, "This notebook requires Python 3."
# Google Cloud Notebook requires dependencies to be installed with '--user'
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
if 'google.colab' in sys.modules:
from google.colab import auth
auth.authenticate_user()
# Install Python package dependencies.
! pip3 install {USER_FLAG} --quiet --upgrade google-api-python-client google-auth-oauthlib \
google-auth-httplib2 oauth2client requests \
google-cloud-aiplatform google-cloud-storage==1.32.0
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. Enter your project id in the first line of the cell below.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. You'll use the *gcloud* command throughout this notebook. In the following cell, enter your project name and run the cell to authenticate yourself with the Google Cloud and initialize your *gcloud* configuration settings.**Model monitoring is currently supported in regions us-central1, europe-west4, asia-east1, and asia-southeast1. To keep things simple for this lab, we're going to use region us-central1 for all our resources (BigQuery training data, Cloud Storage bucket, model and endpoint locations, etc.). You can use any supported region, so long as all resources are co-located.**
###Code
# Import globally needed dependencies here, after kernel restart.
import copy
import numpy as np
import os
import pprint as pp
import random
import sys
import time
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
REGION = "us-central1" # @param {type:"string"}
SUFFIX = "aiplatform.googleapis.com"
API_ENDPOINT = f"{REGION}-{SUFFIX}"
PREDICT_API_ENDPOINT = f"{REGION}-prediction-{SUFFIX}"
if os.getenv("IS_TESTING"):
!gcloud --quiet components install beta
!gcloud --quiet components update
!gcloud config set project $PROJECT_ID
!gcloud config set ai/region $REGION
###Output
_____no_output_____
###Markdown
Log in to your Google Cloud account and enable AI services
###Code
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
!gcloud services enable aiplatform.googleapis.com
###Output
_____no_output_____
###Markdown
Define utilitiesRun the following cells to define some utility functions and distributions used later in this notebook. Although these utilities are not critical to understanding the main concepts, feel free to expand the cells in this section if you're curious or want to dive deeper into how some of your API requests are made.
###Code
# @title Utility imports and constants
from google.cloud.aiplatform_v1beta1.services.endpoint_service import \
EndpointServiceClient
from google.cloud.aiplatform_v1beta1.services.job_service import \
JobServiceClient
from google.cloud.aiplatform_v1beta1.services.prediction_service import \
PredictionServiceClient
from google.cloud.aiplatform_v1beta1.types.io import BigQuerySource
from google.cloud.aiplatform_v1beta1.types.model_deployment_monitoring_job import (
ModelDeploymentMonitoringJob, ModelDeploymentMonitoringObjectiveConfig,
ModelDeploymentMonitoringScheduleConfig)
from google.cloud.aiplatform_v1beta1.types.model_monitoring import (
ModelMonitoringAlertConfig, ModelMonitoringObjectiveConfig,
SamplingStrategy, ThresholdConfig)
from google.cloud.aiplatform_v1beta1.types.prediction_service import \
PredictRequest
from google.protobuf import json_format
from google.protobuf.duration_pb2 import Duration
from google.protobuf.struct_pb2 import Value
# This is the default value at which you would like the monitoring function to trigger an alert.
# In other words, this value fine tunes the alerting sensitivity. This threshold can be customized
# on a per feature basis but this is the global default setting.
DEFAULT_THRESHOLD_VALUE = 0.001
# @title Utility functions
def create_monitoring_job(objective_configs):
# Create sampling configuration.
random_sampling = SamplingStrategy.RandomSampleConfig(sample_rate=LOG_SAMPLE_RATE)
sampling_config = SamplingStrategy(random_sample_config=random_sampling)
# Create schedule configuration.
duration = Duration(seconds=MONITOR_INTERVAL)
schedule_config = ModelDeploymentMonitoringScheduleConfig(monitor_interval=duration)
# Create alerting configuration.
emails = [USER_EMAIL]
email_config = ModelMonitoringAlertConfig.EmailAlertConfig(user_emails=emails)
alerting_config = ModelMonitoringAlertConfig(email_alert_config=email_config)
# Create the monitoring job.
endpoint = f"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}"
predict_schema = ""
analysis_schema = ""
job = ModelDeploymentMonitoringJob(
display_name=JOB_NAME,
endpoint=endpoint,
model_deployment_monitoring_objective_configs=objective_configs,
logging_sampling_strategy=sampling_config,
model_deployment_monitoring_schedule_config=schedule_config,
model_monitoring_alert_config=alerting_config,
predict_instance_schema_uri=predict_schema,
analysis_instance_schema_uri=analysis_schema,
)
options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.create_model_deployment_monitoring_job(
parent=parent, model_deployment_monitoring_job=job
)
print("Created monitoring job:")
print(response)
return response
def get_thresholds(default_thresholds, custom_thresholds):
thresholds = {}
default_threshold = ThresholdConfig(value=DEFAULT_THRESHOLD_VALUE)
for feature in default_thresholds.split(","):
feature = feature.strip()
thresholds[feature] = default_threshold
for custom_threshold in custom_thresholds.split(","):
pair = custom_threshold.split(":")
if len(pair) != 2:
print(f"Invalid custom skew threshold: {custom_threshold}")
return
feature, value = pair
thresholds[feature] = ThresholdConfig(value=float(value))
return thresholds
def get_deployed_model_ids(endpoint_id):
client_options = dict(api_endpoint=API_ENDPOINT)
client = EndpointServiceClient(client_options=client_options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.get_endpoint(name=f"{parent}/endpoints/{endpoint_id}")
model_ids = []
for model in response.deployed_models:
model_ids.append(model.id)
return model_ids
def set_objectives(model_ids, objective_template):
# Use the same objective config for all models.
objective_configs = []
for model_id in model_ids:
objective_config = copy.deepcopy(objective_template)
objective_config.deployed_model_id = model_id
objective_configs.append(objective_config)
return objective_configs
def send_predict_request(endpoint, input):
client_options = {"api_endpoint": PREDICT_API_ENDPOINT}
client = PredictionServiceClient(client_options=client_options)
params = {}
params = json_format.ParseDict(params, Value())
request = PredictRequest(endpoint=endpoint, parameters=params)
inputs = [json_format.ParseDict(input, Value())]
request.instances.extend(inputs)
response = client.predict(request)
return response
def list_monitoring_jobs():
client_options = dict(api_endpoint=API_ENDPOINT)
parent = f"projects/{PROJECT_ID}/locations/us-central1"
client = JobServiceClient(client_options=client_options)
response = client.list_model_deployment_monitoring_jobs(parent=parent)
print(response)
def pause_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.pause_model_deployment_monitoring_job(name=job)
print(response)
def delete_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.delete_model_deployment_monitoring_job(name=job)
print(response)
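# The pause/delete helpers above take the monitoring job's full resource name.
# For example (illustrative), once a job has been created later in this notebook:
#   list_monitoring_jobs()
#   pause_monitoring_job(monitoring_job.name)
#   delete_monitoring_job(monitoring_job.name)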
# @title Utility distributions
# This cell contains parameters enabling us to generate realistic test data that closely
# models the feature distributions found in the training data.
DAYOFWEEK = {1: 1040, 2: 1223, 3: 1352, 4: 1217, 5: 1078, 6: 1011, 7: 1110}
LANGUAGE = {
"en-us": 4807,
"en-gb": 678,
"ja-jp": 419,
"en-au": 310,
"en-ca": 299,
"de-de": 147,
"en-in": 130,
"en": 127,
"fr-fr": 94,
"pt-br": 81,
"es-us": 65,
"zh-tw": 64,
"zh-hans-cn": 55,
"es-mx": 53,
"nl-nl": 37,
"fr-ca": 34,
"en-za": 29,
"vi-vn": 29,
"en-nz": 29,
"es-es": 25,
}
OS = {"IOS": 3980, "ANDROID": 3798, "null": 253}
MONTH = {6: 3125, 7: 1838, 8: 1276, 9: 1718, 10: 74}
COUNTRY = {
"United States": 4395,
"India": 486,
"Japan": 450,
"Canada": 354,
"Australia": 327,
"United Kingdom": 303,
"Germany": 144,
"Mexico": 102,
"France": 97,
"Brazil": 93,
"Taiwan": 72,
"China": 65,
"Saudi Arabia": 49,
"Pakistan": 48,
"Egypt": 46,
"Netherlands": 45,
"Vietnam": 42,
"Philippines": 39,
"South Africa": 38,
}
# Means and standard deviations for numerical features...
MEAN_SD = {
"julianday": (204.6, 34.7),
"cnt_user_engagement": (30.8, 53.2),
"cnt_level_start_quickplay": (7.8, 28.9),
"cnt_level_end_quickplay": (5.0, 16.4),
"cnt_level_complete_quickplay": (2.1, 9.9),
"cnt_level_reset_quickplay": (2.0, 19.6),
"cnt_post_score": (4.9, 13.8),
"cnt_spend_virtual_currency": (0.4, 1.8),
"cnt_ad_reward": (0.1, 0.6),
"cnt_challenge_a_friend": (0.0, 0.3),
"cnt_completed_5_levels": (0.1, 0.4),
"cnt_use_extra_steps": (0.4, 1.7),
}
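# Each numerical feature is later sampled from a normal distribution with these
# statistics and clipped at zero, e.g. a "cnt_user_engagement" value is drawn as
#   max(round(float(np.random.normal(30.8, 53.2, 1))), 0)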
DEFAULT_INPUT = {
"cnt_ad_reward": 0,
"cnt_challenge_a_friend": 0,
"cnt_completed_5_levels": 1,
"cnt_level_complete_quickplay": 3,
"cnt_level_end_quickplay": 5,
"cnt_level_reset_quickplay": 2,
"cnt_level_start_quickplay": 6,
"cnt_post_score": 34,
"cnt_spend_virtual_currency": 0,
"cnt_use_extra_steps": 0,
"cnt_user_engagement": 120,
"country": "Denmark",
"dayofweek": 3,
"julianday": 254,
"language": "da-dk",
"month": 9,
"operating_system": "IOS",
"user_pseudo_id": "104B0770BAE16E8B53DF330C95881893",
}
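# DEFAULT_INPUT is a single hand-crafted record (one synthetic player) used as
# the payload for the test prediction requests sent later in this notebook.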
###Output
_____no_output_____
###Markdown
Import your modelThe churn propensity model you'll be using in this notebook has been trained in BigQuery ML and exported to a Google Cloud Storage bucket. This illustrates how you can easily export a trained model and move a model from one cloud service to another. Run the next cell to import this model into your project. **If you've already imported your model, you can skip this step.**
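If you'd like to inspect the exported SavedModel artifacts before importing them, you can optionally list the source location used in the next cell (this is just a sanity check and assumes the demo bucket is readable from your environment):
```python
# Optional sanity check: list the exported model artifacts in the demo bucket.
!gsutil ls gs://mco-mm/churn
```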
###Code
MODEL_NAME = "churn"
IMAGE = "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-4:latest"
ARTIFACT = "gs://mco-mm/churn"
output = !gcloud --quiet beta ai models upload --container-image-uri=$IMAGE --artifact-uri=$ARTIFACT --display-name=$MODEL_NAME --format="value(model)"
MODEL_ID = output[1].split("/")[-1]
if _exit_code == 0:
print(f"Model {MODEL_NAME}/{MODEL_ID} created.")
else:
print(f"Error creating model: {output}")
###Output
_____no_output_____
###Markdown
Deploy your endpointNow that you've imported your model into your project, you need to create an endpoint to serve your model. An endpoint can be thought of as a channel through which your model provides prediction services. Once established, you'll be able to make prediction requests on your model via the public internet. Your endpoint is also serverless, in the sense that Google ensures high availability by reducing single points of failure, and scalability by dynamically allocating resources to meet the demand for your service. In this way, you are able to focus on your model quality, and are freed from administrative and infrastructure concerns.Run the next cell to deploy your model to an endpoint. **This will take about ten minutes to complete. If you've already deployed a model to an endpoint, you can reuse your endpoint by running the cell after the next one.**
###Code
ENDPOINT_NAME = "churn"
output = !gcloud --quiet beta ai endpoints create --display-name=$ENDPOINT_NAME --format="value(name)"
if _exit_code == 0:
print("Endpoint created.")
else:
print(f"Error creating endpoint: {output}")
ENDPOINT = output[-1]
ENDPOINT_ID = ENDPOINT.split("/")[-1]
output = !gcloud --quiet beta ai endpoints deploy-model $ENDPOINT_ID --display-name=$ENDPOINT_NAME --model=$MODEL_ID --traffic-split="0=100"
DEPLOYED_MODEL_ID = output[-1].split()[-1][:-1]
if _exit_code == 0:
print(
f"Model {MODEL_NAME}/{MODEL_ID} deployed to Endpoint {ENDPOINT_NAME}/{ENDPOINT_ID}."
)
else:
print(f"Error deploying model to endpoint: {output}")
###Output
_____no_output_____
###Markdown
If you already have a deployed endpointYou can reuse your existing endpoint by filling in the value of your endpoint ID in the next cell and running it. **If you've just deployed an endpoint in the previous cell, you should skip this step.**
###Code
# @title Run this cell only if you want to reuse an existing endpoint.
if not os.getenv("IS_TESTING"):
ENDPOINT_ID = "" # @param {type:"string"}
if ENDPOINT_ID:
ENDPOINT = f"projects/{PROJECT_ID}/locations/us-central1/endpoints/{ENDPOINT_ID}"
print(f"Using endpoint {ENDPOINT}")
else:
print("If you want to reuse an existing endpoint, you must specify the endpoint id above.")
###Output
_____no_output_____
###Markdown
Run a prediction testNow that you have imported a model and deployed that model to an endpoint, you are ready to verify that it's working. Run the next cell to send a test prediction request. If everything works as expected, you should receive a response encoded in a text representation called JSON.**Try this now by running the next cell and examining the results.**
###Code
print(ENDPOINT)
print("request:")
pp.pprint(DEFAULT_INPUT)
try:
resp = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("response")
pp.pprint(resp)
except Exception:
print("prediction request failed")
###Output
_____no_output_____
###Markdown
Taking a closer look at the results, we see the following elements:- **churned_values** - a set of possible values (0 and 1) for the target field- **churned_probs** - a corresponding set of probabilities for each possible target field value (5x10^-40 and 1.0, respectively)- **predicted_churn** - based on the probabilities, the predicted value of the target field (1)This response encodes the model's prediction in a format that is readily digestible by software, which makes this service ideal for automated use by an application. Start your monitoring jobNow that you've created an endpoint to serve prediction requests on your model, you're ready to start a monitoring job to keep an eye on model quality and to alert you if and when input begins to deviate in a way that may impact your model's prediction quality.In this section, you will configure and create a model monitoring job based on the churn propensity model you imported from BigQuery ML. Configure the following fields:1. User email - The email address to which you would like monitoring alerts sent.1. Log sample rate - Your prediction requests and responses are logged to BigQuery tables, which are automatically created when you create a monitoring job. This parameter specifies the desired logging frequency for those tables.1. Monitor interval - The time window over which to analyze your data and report anomalies. The minimum window is one hour (3600 seconds).1. Target field - The prediction target column name in the training dataset.1. Skew detection threshold - The skew threshold for each feature you want to monitor.1. Prediction drift threshold - The drift threshold for each feature you want to monitor.
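As a small, self-contained illustration of how an application might consume the response fields described at the start of this section, the sketch below picks the most likely class from placeholder values (these are stand-ins, not output from a live endpoint):
```python
# Hypothetical fields shaped like churned_values / churned_probs above.
prediction = {"churned_values": ["0", "1"], "churned_probs": [5e-40, 1.0]}

# Pair each candidate value with its probability and keep the most likely one.
value, prob = max(
    zip(prediction["churned_values"], prediction["churned_probs"]),
    key=lambda pair: pair[1],
)
print(f"predicted_churn={value} (probability={prob})")
```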
###Code
USER_EMAIL = "[your-email-address]" # @param {type:"string"}
JOB_NAME = "churn"
# Sampling rate (optional, default=.8)
LOG_SAMPLE_RATE = 0.8 # @param {type:"number"}
# Monitoring Interval in seconds (optional, default=3600).
MONITOR_INTERVAL = 3600 # @param {type:"number"}
# URI to training dataset.
DATASET_BQ_URI = "bq://mco-mm.bqmlga4.train" # @param {type:"string"}
# Prediction target column name in training dataset.
TARGET = "churned"
# Skew and drift thresholds.
SKEW_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
SKEW_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" # @param {type:"string"}
DRIFT_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
DRIFT_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" # @param {type:"string"}
###Output
_____no_output_____
###Markdown
Create your monitoring jobThe following code uses the Google Python client library to translate your configuration settings into a programmatic request to start a model monitoring job. To do this successfully, you need to specify your alerting thresholds (for both skew and drift), your training data source, and apply those settings to all deployed models on your new endpoint (of which there should only be one at this point).Instantiating a monitoring job can take some time. If everything looks good with your request, you'll get a successful API response. Then, you'll need to check your email to receive a notification that the job is running.
###Code
# Set thresholds specifying alerting criteria for training/serving skew and create config object.
skew_thresholds = get_thresholds(SKEW_DEFAULT_THRESHOLDS, SKEW_CUSTOM_THRESHOLDS)
skew_config = ModelMonitoringObjectiveConfig.TrainingPredictionSkewDetectionConfig(
skew_thresholds=skew_thresholds
)
# Set thresholds specifying alerting criteria for serving drift and create config object.
drift_thresholds = get_thresholds(DRIFT_DEFAULT_THRESHOLDS, DRIFT_CUSTOM_THRESHOLDS)
drift_config = ModelMonitoringObjectiveConfig.PredictionDriftDetectionConfig(
drift_thresholds=drift_thresholds
)
# Specify training dataset source location (used for schema generation).
training_dataset = ModelMonitoringObjectiveConfig.TrainingDataset(target_field=TARGET)
training_dataset.bigquery_source = BigQuerySource(input_uri=DATASET_BQ_URI)
# Aggregate the above settings into a ModelMonitoringObjectiveConfig object and use
# that object to adjust the ModelDeploymentMonitoringObjectiveConfig object.
objective_config = ModelMonitoringObjectiveConfig(
training_dataset=training_dataset,
training_prediction_skew_detection_config=skew_config,
prediction_drift_detection_config=drift_config,
)
objective_template = ModelDeploymentMonitoringObjectiveConfig(
objective_config=objective_config
)
# Find all deployed model ids on the created endpoint and set objectives for each.
model_ids = get_deployed_model_ids(ENDPOINT_ID)
objective_configs = set_objectives(model_ids, objective_template)
# Create the monitoring job for all deployed models on this endpoint.
monitoring_job = create_monitoring_job(objective_configs)
# Run a prediction request to generate schema, if necessary.
try:
_ = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("prediction succeeded")
except Exception:
print("prediction failed")
###Output
_____no_output_____
###Markdown
After a minute or two, you should receive email at the address you configured above for USER_EMAIL. This email confirms successful deployment of your monitoring job. Here's a sample of what this email might look like:As your monitoring job collects data, measurements are stored in Google Cloud Storage and you are free to examine your data at any time. The circled path in the image above specifies the location of your measurements in Google Cloud Storage. Run the following cell to take a look at your measurements in Cloud Storage.
###Code
!gsutil ls gs://cloud-ai-platform-fdfb4810-148b-4c86-903c-dbdff879f6e1/*/*
###Output
_____no_output_____
###Markdown
You will notice the following components in these Cloud Storage paths:- **cloud-ai-platform-..** - This is a bucket created for you and assigned to capture your service's prediction data. Each monitoring job you create will trigger creation of a new folder in this bucket.- **[model_monitoring|instance_schemas]/job-..** - This is your unique monitoring job number, which you can see above in both the response to your job creation request and the email notification. - **instance_schemas/job-../analysis** - This is the monitoring job's understanding and encoding of your training data's schema (field names, types, etc.).- **instance_schemas/job-../predict** - This is the first prediction made to your model after the current monitoring job was enabled.- **model_monitoring/job-../serving** - This folder is used to record data relevant to drift calculations. It contains measurement summaries for every hour your model serves traffic.- **model_monitoring/job-../training** - This folder is used to record data relevant to training-serving skew calculations. It contains an ongoing summary of prediction data relative to training data. You can create monitoring jobs with other user interfacesIn the previous cells, you created a monitoring job using the Python client library. You can also use the *gcloud* command-line tool to create a model monitoring job and, in the near future, you will be able to use the Cloud Console for this function as well. Generate test data to trigger alertingNow you are ready to test the monitoring function. Run the following cell, which will generate fabricated test predictions designed to trigger the thresholds you specified above. It takes about five minutes to run this cell and at least an hour to assess and report anomalies in skew or drift, so after running this cell, feel free to proceed with the notebook, and you'll see how to examine the resulting alert later.
###Code
def random_uid():
digits = [str(i) for i in range(10)] + ["A", "B", "C", "D", "E", "F"]
return "".join(random.choices(digits, k=32))
def monitoring_test(count, sleep, perturb_num={}, perturb_cat={}):
# Use random sampling and mean/sd with gaussian distribution to model
# training data. Then modify sampling distros for two categorical features
# and mean/sd for two numerical features.
mean_sd = MEAN_SD.copy()
country = COUNTRY.copy()
for k, (mean_fn, sd_fn) in perturb_num.items():
orig_mean, orig_sd = MEAN_SD[k]
mean_sd[k] = (mean_fn(orig_mean), sd_fn(orig_sd))
for k, v in perturb_cat.items():
country[k] = v
for i in range(0, count):
input = DEFAULT_INPUT.copy()
input["user_pseudo_id"] = str(random_uid())
input["country"] = random.choices([*country], list(country.values()))[0]
input["dayofweek"] = random.choices([*DAYOFWEEK], list(DAYOFWEEK.values()))[0]
input["language"] = str(random.choices([*LANGUAGE], list(LANGUAGE.values()))[0])
input["operating_system"] = str(random.choices([*OS], list(OS.values()))[0])
input["month"] = random.choices([*MONTH], list(MONTH.values()))[0]
for key, (mean, sd) in mean_sd.items():
sample_val = round(float(np.random.normal(mean, sd, 1)))
val = max(sample_val, 0)
input[key] = val
print(f"Sending prediction {i}")
try:
send_predict_request(ENDPOINT, input)
except Exception:
print("prediction request failed")
time.sleep(sleep)
print("Test Completed.")
test_time = 300
tests_per_sec = 1
sleep_time = 1 / tests_per_sec
iterations = test_time * tests_per_sec
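# Perturb the synthetic traffic so it deliberately departs from the training
# distribution: triple the mean (and shrink the spread) of cnt_user_engagement,
# and over-weight "Japan" so it is sampled far more often than in training.
# These shifts are what should trip the skew and drift alerts configured above.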
perturb_num = {"cnt_user_engagement": (lambda x: x * 3, lambda x: x / 3)}
perturb_cat = {"Japan": max(COUNTRY.values()) * 2}
monitoring_test(iterations, sleep_time, perturb_num, perturb_cat)
###Output
_____no_output_____
###Markdown
Interpret your resultsWhile waiting for your results, which, as noted, may take up to an hour, you can read ahead to get a sense of the alerting experience. Here's what a sample email alert looks like... This email is warning you that the *cnt_user_engagement*, *country*, and *language* feature values seen in production have skewed above your threshold between training and serving your model. It's also telling you that the *cnt_user_engagement* feature value is drifting significantly over time, again, as per your threshold specification. Monitoring results in the Cloud ConsoleYou can examine your model monitoring data from the Cloud Console. Below is a screenshot of those capabilities. Monitoring Status Monitoring Alerts Clean upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:
###Code
out = !gcloud ai endpoints undeploy-model $ENDPOINT_ID --deployed-model-id $DEPLOYED_MODEL_ID
if _exit_code == 0:
print("Model undeployed.")
else:
print("Error undeploying model:", out)
out = !gcloud ai endpoints delete $ENDPOINT_ID --quiet
if _exit_code == 0:
print("Endpoint deleted.")
else:
print("Error deleting endpoint:", out)
out = !gcloud ai models delete $MODEL_ID --quiet
if _exit_code == 0:
print("Model deleted.")
else:
print("Error deleting model:", out)
###Output
_____no_output_____
###Markdown
Vertex Model Monitoring <a href="https://console.cloud.google.com/ai-platform/notebooks/deploy-notebook?name=Model%20Monitoring&download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fai-platform-samples%2Fmaster%2Fai-platform-unified%2Fnotebooks%2Fofficial%2Fmodel_monitoring%2Fmodel_monitoring.ipynb" Open in GCP Notebooks Open in Colab View on GitHub Overview What is Model Monitoring?Modern applications rely on a well-established set of capabilities to monitor the health of their services. Examples include:* software versioning* rigorous deployment processes* event logging* alerting/notification of situations requiring intervention* on-demand and automated diagnostic tracing* automated performance and functional testingYou should be able to manage your ML services with the same degree of power and flexibility with which you can manage your applications. That's what MLOps is all about - managing ML services with the best practices Google and the broader computing industry have learned from generations of experience deploying well-engineered, reliable, and scalable services.Model monitoring is only one piece of the MLOps puzzle - it helps answer the following questions:* How well do recent service requests match the training data used to build your model? This is called **training-serving skew**.* How significantly are service requests evolving over time? This is called **drift detection**.If production traffic differs from training data, or varies substantially over time, that's likely to impact the quality of the answers your model produces. When that happens, you'd like to be alerted automatically and responsively, so that **you can anticipate problems before they affect your customer experiences or your revenue streams**. ObjectiveIn this notebook, you will learn how to... * deploy a pre-trained model* configure model monitoring* generate some artificial traffic* understand how to interpret the statistics, visualizations, and other data reported by the model monitoring feature Costs This tutorial uses billable components of Google Cloud:* Vertex AI* BigQueryLearn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. The example modelThe model you'll use in this notebook is based on [this blog post](https://cloud.google.com/blog/topics/developers-practitioners/churn-prediction-game-developers-using-google-analytics-4-ga4-and-bigquery-ml). The idea behind this model is that your company has extensive log data describing how your game users have interacted with the site. The raw data contains the following categories of information:- identity - unique player identity numbers- demographic features - information about the player, such as the geographic region in which a player is located- behavioral features - counts of the number of times a player has triggered certain game events, such as reaching a new level- churn propensity - this is the label or target feature; it provides an estimated probability that this player will churn, i.e., stop being an active player.The blog article referenced above explains how to use BigQuery to store the raw data, pre-process it for use in machine learning, and train a model.
Because this notebook focuses on model monitoring, rather than training models, you're going to reuse a pre-trained version of this model, which has been exported to Google Cloud Storage. In the next section, you will set up your environment and import this model into your own project. Before you begin Set up your dependencies
###Code
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
import os
import sys
import IPython
assert sys.version_info.major == 3, "This notebook requires Python 3."
# Install Python package dependencies.
print("Installing TensorFlow 2.4.1 and TensorFlow Data Validation (TFDV)")
! pip3 install {USER_FLAG} --quiet --upgrade tensorflow==2.4.1 tensorflow_data_validation[visualization]
! pip3 install {USER_FLAG} --quiet --upgrade google-api-python-client google-auth-oauthlib google-auth-httplib2 oauth2client requests
! pip3 install {USER_FLAG} --quiet --upgrade google-cloud-aiplatform
! pip3 install {USER_FLAG} --quiet --upgrade google-cloud-storage==1.32.0
# Automatically restart kernel after installing new packages.
if not os.getenv("IS_TESTING"):
print("Restarting kernel...")
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
print("Done.")
import os
import random
import sys
import time
# Import required packages.
import numpy as np
###Output
_____no_output_____
###Markdown
Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. You'll use the *gcloud* command throughout this notebook. In the following cell, enter your project ID and run the cell to authenticate yourself with Google Cloud and initialize your *gcloud* configuration settings.**For this lab, we're going to use region us-central1 for all our resources (BigQuery training data, Cloud Storage bucket, model and endpoint locations, etc.). Those resources can be deployed in other regions, as long as they're consistently co-located, but we're going to use one fixed region to keep things as simple and error-free as possible.**
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
REGION = "us-central1"
SUFFIX = "aiplatform.googleapis.com"
API_ENDPOINT = f"{REGION}-{SUFFIX}"
PREDICT_API_ENDPOINT = f"{REGION}-prediction-{SUFFIX}"
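# With the region above, these resolve to "us-central1-aiplatform.googleapis.com"
# and "us-central1-prediction-aiplatform.googleapis.com".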
if os.getenv("IS_TESTING"):
!gcloud --quiet components install beta
!gcloud --quiet components update
!gcloud config set project $PROJECT_ID
!gcloud config set ai/region $REGION
###Output
_____no_output_____
###Markdown
Login to your Google Cloud account and enable AI services
###Code
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
!gcloud services enable aiplatform.googleapis.com
###Output
_____no_output_____
###Markdown
Define some helper functionsRun the following cell to define some utility functions used throughout this notebook. Although understanding these functions is not critical to the main concepts, feel free to expand the cell if you're curious or want to dive deeper into how some of your API requests are made.
###Code
# @title Utility functions
import copy
import os
from google.cloud.aiplatform_v1beta1.services.endpoint_service import \
EndpointServiceClient
from google.cloud.aiplatform_v1beta1.services.job_service import \
JobServiceClient
from google.cloud.aiplatform_v1beta1.services.prediction_service import \
PredictionServiceClient
from google.cloud.aiplatform_v1beta1.types.io import BigQuerySource
from google.cloud.aiplatform_v1beta1.types.model_deployment_monitoring_job import (
ModelDeploymentMonitoringJob, ModelDeploymentMonitoringObjectiveConfig,
ModelDeploymentMonitoringScheduleConfig)
from google.cloud.aiplatform_v1beta1.types.model_monitoring import (
ModelMonitoringAlertConfig, ModelMonitoringObjectiveConfig,
SamplingStrategy, ThresholdConfig)
from google.cloud.aiplatform_v1beta1.types.prediction_service import \
PredictRequest
from google.protobuf import json_format
from google.protobuf.duration_pb2 import Duration
from google.protobuf.struct_pb2 import Value
DEFAULT_THRESHOLD_VALUE = 0.001
def create_monitoring_job(objective_configs):
# Create sampling configuration.
random_sampling = SamplingStrategy.RandomSampleConfig(sample_rate=LOG_SAMPLE_RATE)
sampling_config = SamplingStrategy(random_sample_config=random_sampling)
# Create schedule configuration.
duration = Duration(seconds=MONITOR_INTERVAL)
schedule_config = ModelDeploymentMonitoringScheduleConfig(monitor_interval=duration)
# Create alerting configuration.
emails = [USER_EMAIL]
email_config = ModelMonitoringAlertConfig.EmailAlertConfig(user_emails=emails)
alerting_config = ModelMonitoringAlertConfig(email_alert_config=email_config)
# Create the monitoring job.
endpoint = f"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}"
predict_schema = ""
analysis_schema = ""
job = ModelDeploymentMonitoringJob(
display_name=JOB_NAME,
endpoint=endpoint,
model_deployment_monitoring_objective_configs=objective_configs,
logging_sampling_strategy=sampling_config,
model_deployment_monitoring_schedule_config=schedule_config,
model_monitoring_alert_config=alerting_config,
predict_instance_schema_uri=predict_schema,
analysis_instance_schema_uri=analysis_schema,
)
options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.create_model_deployment_monitoring_job(
parent=parent, model_deployment_monitoring_job=job
)
print("Created monitoring job:")
print(response)
return response
def get_thresholds(default_thresholds, custom_thresholds):
thresholds = {}
default_threshold = ThresholdConfig(value=DEFAULT_THRESHOLD_VALUE)
for feature in default_thresholds.split(","):
feature = feature.strip()
thresholds[feature] = default_threshold
for custom_threshold in custom_thresholds.split(","):
pair = custom_threshold.split(":")
if len(pair) != 2:
print(f"Invalid custom skew threshold: {custom_threshold}")
return
feature, value = pair
thresholds[feature] = ThresholdConfig(value=float(value))
return thresholds
def get_deployed_model_ids(endpoint_id):
client_options = dict(api_endpoint=API_ENDPOINT)
client = EndpointServiceClient(client_options=client_options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.get_endpoint(name=f"{parent}/endpoints/{endpoint_id}")
model_ids = []
for model in response.deployed_models:
model_ids.append(model.id)
return model_ids
def set_objectives(model_ids, objective_template):
# Use the same objective config for all models.
objective_configs = []
for model_id in model_ids:
objective_config = copy.deepcopy(objective_template)
objective_config.deployed_model_id = model_id
objective_configs.append(objective_config)
return objective_configs
def send_predict_request(endpoint, input):
client_options = {"api_endpoint": PREDICT_API_ENDPOINT}
client = PredictionServiceClient(client_options=client_options)
params = {}
params = json_format.ParseDict(params, Value())
request = PredictRequest(endpoint=endpoint, parameters=params)
inputs = [json_format.ParseDict(input, Value())]
request.instances.extend(inputs)
response = client.predict(request)
return response
def list_monitoring_jobs():
client_options = dict(api_endpoint=API_ENDPOINT)
parent = f"projects/{PROJECT_ID}/locations/us-central1"
client = JobServiceClient(client_options=client_options)
response = client.list_model_deployment_monitoring_jobs(parent=parent)
print(response)
def pause_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.pause_model_deployment_monitoring_job(name=job)
print(response)
def delete_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.delete_model_deployment_monitoring_job(name=job)
print(response)
# Sampling distributions for categorical features...
DAYOFWEEK = {1: 1040, 2: 1223, 3: 1352, 4: 1217, 5: 1078, 6: 1011, 7: 1110}
LANGUAGE = {
"en-us": 4807,
"en-gb": 678,
"ja-jp": 419,
"en-au": 310,
"en-ca": 299,
"de-de": 147,
"en-in": 130,
"en": 127,
"fr-fr": 94,
"pt-br": 81,
"es-us": 65,
"zh-tw": 64,
"zh-hans-cn": 55,
"es-mx": 53,
"nl-nl": 37,
"fr-ca": 34,
"en-za": 29,
"vi-vn": 29,
"en-nz": 29,
"es-es": 25,
}
OS = {"IOS": 3980, "ANDROID": 3798, "null": 253}
MONTH = {6: 3125, 7: 1838, 8: 1276, 9: 1718, 10: 74}
COUNTRY = {
"United States": 4395,
"India": 486,
"Japan": 450,
"Canada": 354,
"Australia": 327,
"United Kingdom": 303,
"Germany": 144,
"Mexico": 102,
"France": 97,
"Brazil": 93,
"Taiwan": 72,
"China": 65,
"Saudi Arabia": 49,
"Pakistan": 48,
"Egypt": 46,
"Netherlands": 45,
"Vietnam": 42,
"Philippines": 39,
"South Africa": 38,
}
# Means and standard deviations for numerical features...
MEAN_SD = {
"julianday": (204.6, 34.7),
"cnt_user_engagement": (30.8, 53.2),
"cnt_level_start_quickplay": (7.8, 28.9),
"cnt_level_end_quickplay": (5.0, 16.4),
"cnt_level_complete_quickplay": (2.1, 9.9),
"cnt_level_reset_quickplay": (2.0, 19.6),
"cnt_post_score": (4.9, 13.8),
"cnt_spend_virtual_currency": (0.4, 1.8),
"cnt_ad_reward": (0.1, 0.6),
"cnt_challenge_a_friend": (0.0, 0.3),
"cnt_completed_5_levels": (0.1, 0.4),
"cnt_use_extra_steps": (0.4, 1.7),
}
DEFAULT_INPUT = {
"cnt_ad_reward": 0,
"cnt_challenge_a_friend": 0,
"cnt_completed_5_levels": 1,
"cnt_level_complete_quickplay": 3,
"cnt_level_end_quickplay": 5,
"cnt_level_reset_quickplay": 2,
"cnt_level_start_quickplay": 6,
"cnt_post_score": 34,
"cnt_spend_virtual_currency": 0,
"cnt_use_extra_steps": 0,
"cnt_user_engagement": 120,
"country": "Denmark",
"dayofweek": 3,
"julianday": 254,
"language": "da-dk",
"month": 9,
"operating_system": "IOS",
"user_pseudo_id": "104B0770BAE16E8B53DF330C95881893",
}
###Output
_____no_output_____
###Markdown
Import your modelThe churn propensity model you'll be using in this notebook has been trained in BigQuery ML and exported to a Google Cloud Storage bucket. This illustrates how you can easily export a trained model and move a model from one cloud service to another. Run the next cell to import this model into your project. **If you've already imported your model, you can skip this step.**
###Code
MODEL_NAME = "churn"
IMAGE = "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-4:latest"
ARTIFACT = "gs://mco-mm/churn"
output = !gcloud --quiet beta ai models upload --container-image-uri=$IMAGE --artifact-uri=$ARTIFACT --display-name=$MODEL_NAME --format="value(model)"
print("model output: ", output)
MODEL_ID = output[1].split("/")[-1]
print(f"Model {MODEL_NAME}/{MODEL_ID} created.")
###Output
_____no_output_____
###Markdown
Deploy your endpointNow that you've imported your model into your project, you need to create an endpoint to serve your model. An endpoint can be thought of as a channel through which your model provides prediction services. Once established, you'll be able to make prediction requests on your model via the public internet. Your endpoint is also serverless, in the sense that Google ensures high availability by reducing single points of failure, and scalability by dynamically allocating resources to meet the demand for your service. In this way, you are able to focus on your model quality, and are freed from administrative and infrastructure concerns.Run the next cell to deploy your model to an endpoint. **This will take about ten minutes to complete. If you've already deployed a model to an endpoint, you can reuse your endpoint by running the cell after the next one.**
###Code
ENDPOINT_NAME = "churn"
output = !gcloud --quiet beta ai endpoints create --display-name=$ENDPOINT_NAME --format="value(name)"
print("endpoint output: ", output)
ENDPOINT = output[-1]
ENDPOINT_ID = ENDPOINT.split("/")[-1]
output = !gcloud --quiet beta ai endpoints deploy-model $ENDPOINT_ID --display-name=$ENDPOINT_NAME --model=$MODEL_ID --traffic-split="0=100"
DEPLOYED_MODEL_ID = output[1].split()[-1][:-1]
print(
f"Model {MODEL_NAME}/{MODEL_ID}/{DEPLOYED_MODEL_ID} deployed to Endpoint {ENDPOINT_NAME}/{ENDPOINT_ID}/{ENDPOINT}."
)
# @title Run this cell only if you want to reuse an existing endpoint.
if not os.getenv("IS_TESTING"):
ENDPOINT_ID = "" # @param {type:"string"}
ENDPOINT = f"projects/mco-mm/locations/us-central1/endpoints/{ENDPOINT_ID}"
###Output
_____no_output_____
###Markdown
Run a prediction testNow that you have imported a model and deployed that model to an endpoint, you are ready to verify that it's working. Run the next cell to send a test prediction request. If everything works as expected, you should receive a response encoded in a text representation called JSON.**Try this now by running the next cell and examining the results.**
###Code
import pprint as pp
print(ENDPOINT)
print("request:")
pp.pprint(DEFAULT_INPUT)
try:
resp = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("response")
pp.pprint(resp)
except Exception:
print("prediction request failed")
###Output
_____no_output_____
###Markdown
Taking a closer look at the results, we see the following elements:- **churned_values** - a set of possible values (0 and 1) for the target field- **churned_probs** - a corresponding set of probabilities for each possible target field value (5x10^-40 and 1.0, respectively)- **predicted_churn** - based on the probabilities, the predicted value of the target field (1)This response encodes the model's prediction in a format that is readily digestible by software, which makes this service ideal for automated use by an application. Start your monitoring jobNow that you've created an endpoint to serve prediction requests on your model, you're ready to start a monitoring job to keep an eye on model quality and to alert you if and when input begins to deviate in a way that may impact your model's prediction quality.In this section, you will configure and create a model monitoring job based on the churn propensity model you imported from BigQuery ML. Configure the following fields:1. User email - The email address to which you would like monitoring alerts sent.1. Log sample rate - Your prediction requests and responses are logged to BigQuery tables, which are automatically created when you create a monitoring job. This parameter specifies the desired logging frequency for those tables.1. Monitor interval - The time window over which to analyze your data and report anomalies. The minimum window is one hour (3600 seconds).1. Target field - The prediction target column name in the training dataset.1. Skew detection threshold - The skew threshold for each feature you want to monitor.1. Prediction drift threshold - The drift threshold for each feature you want to monitor.
###Code
USER_EMAIL = "" # @param {type:"string"}
JOB_NAME = "churn"
# Sampling rate (optional, default=.8)
LOG_SAMPLE_RATE = 0.8 # @param {type:"number"}
# Monitoring Interval in seconds (optional, default=3600).
MONITOR_INTERVAL = 3600 # @param {type:"number"}
# URI to training dataset.
DATASET_BQ_URI = "bq://mco-mm.bqmlga4.train" # @param {type:"string"}
# Prediction target column name in training dataset.
TARGET = "churned"
# Skew and drift thresholds.
SKEW_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
SKEW_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" # @param {type:"string"}
DRIFT_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
DRIFT_CUSTOM_THRESHOLDS = "cnt_user_engagement:.5" # @param {type:"string"}
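# With these settings (for illustration): "country" and "language" use the
# default alert threshold of 0.001 for both skew and drift, while
# "cnt_user_engagement" uses a custom threshold of 0.5.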
###Output
_____no_output_____
###Markdown
Create your monitoring jobThe following code uses the Google Python client library to translate your configuration settings into a programmatic request to start a model monitoring job. Instantiating a monitoring job can take some time. If everything looks good with your request, you'll get a successful API response. Then, you'll need to check your email to receive a notification that the job is running.
###Code
skew_thresholds = get_thresholds(SKEW_DEFAULT_THRESHOLDS, SKEW_CUSTOM_THRESHOLDS)
drift_thresholds = get_thresholds(DRIFT_DEFAULT_THRESHOLDS, DRIFT_CUSTOM_THRESHOLDS)
skew_config = ModelMonitoringObjectiveConfig.TrainingPredictionSkewDetectionConfig(
skew_thresholds=skew_thresholds
)
drift_config = ModelMonitoringObjectiveConfig.PredictionDriftDetectionConfig(
drift_thresholds=drift_thresholds
)
training_dataset = ModelMonitoringObjectiveConfig.TrainingDataset(target_field=TARGET)
training_dataset.bigquery_source = BigQuerySource(input_uri=DATASET_BQ_URI)
objective_config = ModelMonitoringObjectiveConfig(
training_dataset=training_dataset,
training_prediction_skew_detection_config=skew_config,
prediction_drift_detection_config=drift_config,
)
model_ids = get_deployed_model_ids(ENDPOINT_ID)
objective_template = ModelDeploymentMonitoringObjectiveConfig(
objective_config=objective_config
)
objective_configs = set_objectives(model_ids, objective_template)
monitoring_job = create_monitoring_job(objective_configs)
# Run a prediction request to generate schema, if necessary.
try:
_ = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("prediction succeeded")
except Exception:
print("prediction failed")
###Output
_____no_output_____
###Markdown
After a minute or two, you should receive email at the address you configured above for USER_EMAIL. This email confirms successful deployment of your monitoring job. Here's a sample of what this email might look like:As your monitoring job collects data, measurements are stored in Google Cloud Storage and you are free to examine your data at any time. The circled path in the image above specifies the location of your measurements in Google Cloud Storage. Run the following cell to take a look at your measurements in Cloud Storage.
###Code
!gsutil ls gs://cloud-ai-platform-fdfb4810-148b-4c86-903c-dbdff879f6e1/*/*
###Output
_____no_output_____
###Markdown
You will notice the following components in these Cloud Storage paths:- **cloud-ai-platform-..** - This is a bucket created for you and assigned to capture your service's prediction data. Each monitoring job you create will trigger creation of a new folder in this bucket.- **[model_monitoring|instance_schemas]/job-..** - This is your unique monitoring job number, which you can see above in both the response to your job creation request and the email notification. - **instance_schemas/job-../analysis** - This is the monitoring job's understanding and encoding of your training data's schema (field names, types, etc.).- **instance_schemas/job-../predict** - This is the first prediction made to your model after the current monitoring job was enabled.- **model_monitoring/job-../serving** - This folder is used to record data relevant to drift calculations. It contains measurement summaries for every hour your model serves traffic.- **model_monitoring/job-../training** - This folder is used to record data relevant to training-serving skew calculations. It contains an ongoing summary of prediction data relative to training data. You can create monitoring jobs with other user interfacesIn the previous cells, you created a monitoring job using the Python client library. You can also use the *gcloud* command-line tool to create a model monitoring job and, in the near future, you will be able to use the Cloud Console for this function as well. Generate test data to trigger alertingNow you are ready to test the monitoring function. Run the following cell, which will generate fabricated test predictions designed to trigger the thresholds you specified above. It takes about five minutes to run this cell and at least an hour to assess and report anomalies in skew or drift, so after running this cell, feel free to proceed with the notebook, and you'll see how to examine the resulting alert later.
###Code
def random_uid():
digits = [str(i) for i in range(10)] + ["A", "B", "C", "D", "E", "F"]
return "".join(random.choices(digits, k=32))
def monitoring_test(count, sleep, perturb_num={}, perturb_cat={}):
# Use random sampling and mean/sd with gaussian distribution to model
# training data. Then modify sampling distros for two categorical features
# and mean/sd for two numerical features.
mean_sd = MEAN_SD.copy()
country = COUNTRY.copy()
for k, (mean_fn, sd_fn) in perturb_num.items():
orig_mean, orig_sd = MEAN_SD[k]
mean_sd[k] = (mean_fn(orig_mean), sd_fn(orig_sd))
for k, v in perturb_cat.items():
country[k] = v
for i in range(0, count):
input = DEFAULT_INPUT.copy()
input["user_pseudo_id"] = str(random_uid())
input["country"] = random.choices([*country], list(country.values()))[0]
input["dayofweek"] = random.choices([*DAYOFWEEK], list(DAYOFWEEK.values()))[0]
input["language"] = str(random.choices([*LANGUAGE], list(LANGUAGE.values()))[0])
input["operating_system"] = str(random.choices([*OS], list(OS.values()))[0])
input["month"] = random.choices([*MONTH], list(MONTH.values()))[0]
for key, (mean, sd) in mean_sd.items():
sample_val = round(float(np.random.normal(mean, sd, 1)))
val = max(sample_val, 0)
input[key] = val
print(f"Sending prediction {i}")
try:
send_predict_request(ENDPOINT, input)
except Exception:
print("prediction request failed")
time.sleep(sleep)
print("Test Completed.")
test_time = 300
tests_per_sec = 1
sleep_time = 1 / tests_per_sec
iterations = test_time * tests_per_sec
perturb_num = {"cnt_user_engagement": (lambda x: x * 3, lambda x: x / 3)}
perturb_cat = {"Japan": max(COUNTRY.values()) * 2}
monitoring_test(iterations, sleep_time, perturb_num, perturb_cat)
###Output
_____no_output_____
###Markdown
Interpret your resultsWhile waiting for your results, which, as noted, may take up to an hour, you can read ahead to get a sense of the alerting experience. Here's what a sample email alert looks like... This email is warning you that the *cnt_user_engagement*, *country*, and *language* feature values seen in production have skewed above your threshold between training and serving your model. It's also telling you that the *cnt_user_engagement* feature value is drifting significantly over time, again, as per your threshold specification. Monitoring results in the Cloud ConsoleYou can examine your model monitoring data from the Cloud Console. Below is a screenshot of those capabilities. Monitoring Status Monitoring Alerts Clean upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:
###Code
# Delete endpoint resource
!gcloud ai endpoints delete $ENDPOINT_NAME --quiet
# Delete model resource
!gcloud ai models delete $MODEL_NAME --quiet
###Output
_____no_output_____
###Markdown
Vertex AI Model Monitoring <a href="https://console.cloud.google.com/ai-platform/notebooks/deploy-notebook?name=Model%20Monitoring&download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fai-platform-samples%2Fmaster%2Fai-platform-unified%2Fnotebooks%2Fofficial%2Fmodel_monitoring%2Fmodel_monitoring.ipynb" Open in GCP Notebooks Open in Colab View on GitHub Overview What is Model Monitoring?Modern applications rely on a well-established set of capabilities to monitor the health of their services. Examples include:* software versioning* rigorous deployment processes* event logging* alerting/notification of situations requiring intervention* on-demand and automated diagnostic tracing* automated performance and functional testingYou should be able to manage your ML services with the same degree of power and flexibility with which you can manage your applications. That's what MLOps is all about - managing ML services with the best practices Google and the broader computing industry have learned from generations of experience deploying well-engineered, reliable, and scalable services.Model monitoring is only one piece of the MLOps puzzle - it helps answer the following questions:* How well do recent service requests match the training data used to build your model? This is called **training-serving skew**.* How significantly are service requests evolving over time? This is called **drift detection**.If production traffic differs from training data, or varies substantially over time, that's likely to impact the quality of the answers your model produces. When that happens, you'd like to be alerted automatically and responsively, so that **you can anticipate problems before they affect your customer experiences or your revenue streams**. ObjectiveIn this notebook, you will learn how to... * deploy a pre-trained model* configure model monitoring* generate some artificial traffic* understand how to interpret the statistics, visualizations, and other data reported by the model monitoring feature Costs This tutorial uses billable components of Google Cloud:* Vertex AI* BigQueryLearn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. The example modelThe model you'll use in this notebook is based on [this blog post](https://cloud.google.com/blog/topics/developers-practitioners/churn-prediction-game-developers-using-google-analytics-4-ga4-and-bigquery-ml). The idea behind this model is that your company has extensive log data describing how your game users have interacted with the site. The raw data contains the following categories of information:- identity - unique player identity numbers- demographic features - information about the player, such as the geographic region in which a player is located- behavioral features - counts of the number of times a player has triggered certain game events, such as reaching a new level- churn propensity - this is the label or target feature; it provides an estimated probability that this player will churn, i.e., stop being an active player.The blog article referenced above explains how to use BigQuery to store the raw data, pre-process it for use in machine learning, and train a model.
Because this notebook focuses on model monitoring, rather than training models, you're going to reuse a pre-trained version of this model, which has been exported to Google Cloud Storage. In the next section, you will set up your environment and import this model into your own project. Before you begin Set up your dependencies
###Code
import os
import sys
import IPython
assert sys.version_info.major == 3, "This notebook requires Python 3."
# Install Python package dependencies.
print("Installing TensorFlow 2.4.1 and TensorFlow Data Validation (TFDV)")
!pip3 install -q numpy
!pip3 install -q tensorflow==2.4.1 tensorflow_data_validation[visualization]
!pip3 install -q --upgrade google-api-python-client google-auth-oauthlib google-auth-httplib2 oauth2client requests
!pip3 install -q google-cloud-aiplatform
!pip3 install -q --upgrade google-cloud-storage==1.32.0
# Automatically restart kernel after installing new packages.
if not os.getenv("IS_TESTING"):
print("Restarting kernel...")
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
print("Done.")
import os
import random
import sys
import time
# Import required packages.
import numpy as np
###Output
_____no_output_____
###Markdown
Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. You'll use the *gcloud* command throughout this notebook. In the following cell, enter your project ID and run the cell to authenticate yourself with Google Cloud and initialize your *gcloud* configuration settings.**For this lab, we're going to use region us-central1 for all our resources (BigQuery training data, Cloud Storage bucket, model and endpoint locations, etc.). Those resources can be deployed in other regions, as long as they're consistently co-located, but we're going to use one fixed region to keep things as simple and error-free as possible.**
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
REGION = "us-central1"
SUFFIX = "aiplatform.googleapis.com"
API_ENDPOINT = f"{REGION}-{SUFFIX}"
PREDICT_API_ENDPOINT = f"{REGION}-prediction-{SUFFIX}"
if os.getenv("IS_TESTING"):
!gcloud --quiet components install beta
!gcloud --quiet components update
!gcloud config set project $PROJECT_ID
!gcloud config set ai/region $REGION
###Output
_____no_output_____
###Markdown
Login to your Google Cloud account and enable AI services
###Code
# If on AI Platform, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
!gcloud services enable aiplatform.googleapis.com
###Output
_____no_output_____
###Markdown
Define some helper functionsRun the following cell to define some utility functions used throughout this notebook. Although understanding these functions is not critical to the main concepts, feel free to expand the cell if you're curious or want to dive deeper into how some of your API requests are made.
###Code
# @title Utility functions
import copy
import os
from google.cloud.aiplatform_v1beta1.services.endpoint_service import \
EndpointServiceClient
from google.cloud.aiplatform_v1beta1.services.job_service import \
JobServiceClient
from google.cloud.aiplatform_v1beta1.services.prediction_service import \
PredictionServiceClient
from google.cloud.aiplatform_v1beta1.types.io import BigQuerySource
from google.cloud.aiplatform_v1beta1.types.model_deployment_monitoring_job import (
ModelDeploymentMonitoringJob, ModelDeploymentMonitoringObjectiveConfig,
ModelDeploymentMonitoringScheduleConfig)
from google.cloud.aiplatform_v1beta1.types.model_monitoring import (
ModelMonitoringAlertConfig, ModelMonitoringObjectiveConfig,
SamplingStrategy, ThresholdConfig)
from google.cloud.aiplatform_v1beta1.types.prediction_service import \
PredictRequest
from google.protobuf import json_format
from google.protobuf.duration_pb2 import Duration
from google.protobuf.struct_pb2 import Value
DEFAULT_THRESHOLD_VALUE = 0.001
def create_monitoring_job(objective_configs):
# Create sampling configuration.
random_sampling = SamplingStrategy.RandomSampleConfig(sample_rate=LOG_SAMPLE_RATE)
sampling_config = SamplingStrategy(random_sample_config=random_sampling)
# Create schedule configuration.
duration = Duration(seconds=MONITOR_INTERVAL)
schedule_config = ModelDeploymentMonitoringScheduleConfig(monitor_interval=duration)
# Create alerting configuration.
emails = [USER_EMAIL]
email_config = ModelMonitoringAlertConfig.EmailAlertConfig(user_emails=emails)
alerting_config = ModelMonitoringAlertConfig(email_alert_config=email_config)
# Create the monitoring job.
endpoint = f"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}"
predict_schema = ""
analysis_schema = ""
job = ModelDeploymentMonitoringJob(
display_name=JOB_NAME,
endpoint=endpoint,
model_deployment_monitoring_objective_configs=objective_configs,
logging_sampling_strategy=sampling_config,
model_deployment_monitoring_schedule_config=schedule_config,
model_monitoring_alert_config=alerting_config,
predict_instance_schema_uri=predict_schema,
analysis_instance_schema_uri=analysis_schema,
)
options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.create_model_deployment_monitoring_job(
parent=parent, model_deployment_monitoring_job=job
)
print("Created monitoring job:")
print(response)
return response
def get_thresholds(default_thresholds, custom_thresholds):
thresholds = {}
default_threshold = ThresholdConfig(value=DEFAULT_THRESHOLD_VALUE)
for feature in default_thresholds.split(","):
feature = feature.strip()
thresholds[feature] = default_threshold
for custom_threshold in custom_thresholds.split(","):
pair = custom_threshold.split(":")
if len(pair) != 2:
print(f"Invalid custom skew threshold: {custom_threshold}")
return
feature, value = pair
thresholds[feature] = ThresholdConfig(value=float(value))
return thresholds
def get_deployed_model_ids(endpoint_id):
client_options = dict(api_endpoint=API_ENDPOINT)
client = EndpointServiceClient(client_options=client_options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.get_endpoint(name=f"{parent}/endpoints/{endpoint_id}")
model_ids = []
for model in response.deployed_models:
model_ids.append(model.id)
return model_ids
def set_objectives(model_ids, objective_template):
# Use the same objective config for all models.
objective_configs = []
for model_id in model_ids:
objective_config = copy.deepcopy(objective_template)
objective_config.deployed_model_id = model_id
objective_configs.append(objective_config)
return objective_configs
def send_predict_request(endpoint, input):
client_options = {"api_endpoint": PREDICT_API_ENDPOINT}
client = PredictionServiceClient(client_options=client_options)
params = {}
params = json_format.ParseDict(params, Value())
request = PredictRequest(endpoint=endpoint, parameters=params)
inputs = [json_format.ParseDict(input, Value())]
request.instances.extend(inputs)
response = client.predict(request)
return response
def list_monitoring_jobs():
client_options = dict(api_endpoint=API_ENDPOINT)
parent = f"projects/{PROJECT_ID}/locations/us-central1"
client = JobServiceClient(client_options=client_options)
response = client.list_model_deployment_monitoring_jobs(parent=parent)
print(response)
def pause_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.pause_model_deployment_monitoring_job(name=job)
print(response)
def delete_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.delete_model_deployment_monitoring_job(name=job)
print(response)
# Sampling distributions for categorical features...
DAYOFWEEK = {1: 1040, 2: 1223, 3: 1352, 4: 1217, 5: 1078, 6: 1011, 7: 1110}
LANGUAGE = {
"en-us": 4807,
"en-gb": 678,
"ja-jp": 419,
"en-au": 310,
"en-ca": 299,
"de-de": 147,
"en-in": 130,
"en": 127,
"fr-fr": 94,
"pt-br": 81,
"es-us": 65,
"zh-tw": 64,
"zh-hans-cn": 55,
"es-mx": 53,
"nl-nl": 37,
"fr-ca": 34,
"en-za": 29,
"vi-vn": 29,
"en-nz": 29,
"es-es": 25,
}
OS = {"IOS": 3980, "ANDROID": 3798, "null": 253}
MONTH = {6: 3125, 7: 1838, 8: 1276, 9: 1718, 10: 74}
COUNTRY = {
"United States": 4395,
"India": 486,
"Japan": 450,
"Canada": 354,
"Australia": 327,
"United Kingdom": 303,
"Germany": 144,
"Mexico": 102,
"France": 97,
"Brazil": 93,
"Taiwan": 72,
"China": 65,
"Saudi Arabia": 49,
"Pakistan": 48,
"Egypt": 46,
"Netherlands": 45,
"Vietnam": 42,
"Philippines": 39,
"South Africa": 38,
}
# Means and standard deviations for numerical features...
MEAN_SD = {
"julianday": (204.6, 34.7),
"cnt_user_engagement": (30.8, 53.2),
"cnt_level_start_quickplay": (7.8, 28.9),
"cnt_level_end_quickplay": (5.0, 16.4),
"cnt_level_complete_quickplay": (2.1, 9.9),
"cnt_level_reset_quickplay": (2.0, 19.6),
"cnt_post_score": (4.9, 13.8),
"cnt_spend_virtual_currency": (0.4, 1.8),
"cnt_ad_reward": (0.1, 0.6),
"cnt_challenge_a_friend": (0.0, 0.3),
"cnt_completed_5_levels": (0.1, 0.4),
"cnt_use_extra_steps": (0.4, 1.7),
}
DEFAULT_INPUT = {
"cnt_ad_reward": 0,
"cnt_challenge_a_friend": 0,
"cnt_completed_5_levels": 1,
"cnt_level_complete_quickplay": 3,
"cnt_level_end_quickplay": 5,
"cnt_level_reset_quickplay": 2,
"cnt_level_start_quickplay": 6,
"cnt_post_score": 34,
"cnt_spend_virtual_currency": 0,
"cnt_use_extra_steps": 0,
"cnt_user_engagement": 120,
"country": "Denmark",
"dayofweek": 3,
"julianday": 254,
"language": "da-dk",
"month": 9,
"operating_system": "IOS",
"user_pseudo_id": "104B0770BAE16E8B53DF330C95881893",
}
###Output
_____no_output_____
###Markdown
Import your model
The churn propensity model you'll be using in this notebook has been trained in BigQuery ML and exported to a Google Cloud Storage bucket. This illustrates how you can easily export a trained model and move a model from one cloud service to another. Run the next cell to import this model into your project. **If you've already imported your model, you can skip this step.**
###Code
MODEL_NAME = "churn"
IMAGE = "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-4:latest"
ARTIFACT = "gs://mco-mm/churn"
output = !gcloud --quiet beta ai models upload --container-image-uri=$IMAGE --artifact-uri=$ARTIFACT --display-name=$MODEL_NAME --format="value(model)"
print("model output: ", output)
MODEL_ID = output[1].split("/")[-1]
print(f"Model {MODEL_NAME}/{MODEL_ID} created.")
###Output
_____no_output_____
###Markdown
Deploy your endpoint
Now that you've imported your model into your project, you need to create an endpoint to serve your model. An endpoint can be thought of as a channel through which your model provides prediction services. Once established, you'll be able to make prediction requests on your model via the public internet. Your endpoint is also serverless, in the sense that Google ensures high availability by reducing single points of failure, and scalability by dynamically allocating resources to meet the demand for your service. In this way, you are able to focus on your model quality, and are freed from administrative and infrastructure concerns.

Run the next cell to deploy your model to an endpoint. **This will take about ten minutes to complete. If you've already deployed a model to an endpoint, you can reuse your endpoint by running the cell after the next one.**
###Code
ENDPOINT_NAME = "churn"
output = !gcloud --quiet beta ai endpoints create --display-name=$ENDPOINT_NAME --format="value(name)"
print("endpoint output: ", output)
ENDPOINT = output[-1]
ENDPOINT_ID = ENDPOINT.split("/")[-1]
output = !gcloud --quiet beta ai endpoints deploy-model $ENDPOINT_ID --display-name=$ENDPOINT_NAME --model=$MODEL_ID --traffic-split="0=100"
DEPLOYED_MODEL_ID = output[1].split()[-1][:-1]
print(
f"Model {MODEL_NAME}/{MODEL_ID}/{DEPLOYED_MODEL_ID} deployed to Endpoint {ENDPOINT_NAME}/{ENDPOINT_ID}/{ENDPOINT}."
)
# @title Run this cell only if you want to reuse an existing endpoint.
if not os.getenv("IS_TESTING"):
ENDPOINT_ID = "" # @param {type:"string"}
ENDPOINT = f"projects/mco-mm/locations/us-central1/endpoints/{ENDPOINT_ID}"
###Output
_____no_output_____
###Markdown
Run a prediction test
Now that you have imported a model and deployed that model to an endpoint, you are ready to verify that it's working. Run the next cell to send a test prediction request. If everything works as expected, you should receive a response encoded in a text representation called JSON.

**Try this now by running the next cell and examining the results.**
###Code
import pprint as pp
print(ENDPOINT)
print("request:")
pp.pprint(DEFAULT_INPUT)
try:
resp = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("response")
pp.pprint(resp)
except Exception:
print("prediction request failed")
###Output
_____no_output_____
###Markdown
Taking a closer look at the results, we see the following elements:
- **churned_values** - a set of possible values (0 and 1) for the target field
- **churned_probs** - a corresponding set of probabilities for each possible target field value (5x10^-40 and 1.0, respectively)
- **predicted_churn** - based on the probabilities, the predicted value of the target field (1)

This response encodes the model's prediction in a format that is readily digestible by software, which makes this service ideal for automated use by an application.

Start your monitoring job
Now that you've created an endpoint to serve prediction requests on your model, you're ready to start a monitoring job to keep an eye on model quality and to alert you if and when input begins to deviate in a way that may impact your model's prediction quality.

In this section, you will configure and create a model monitoring job based on the churn propensity model you imported from BigQuery ML. Configure the following fields:
1. Log sample rate - Your prediction requests and responses are logged to BigQuery tables, which are automatically created when you create a monitoring job. This parameter specifies the desired logging frequency for those tables.
1. Monitor interval - the time window over which to analyze your data and report anomalies. The minimum window is one hour (3600 seconds).
1. Target field - the prediction target column name in the training dataset.
1. Skew detection threshold - the skew threshold for each feature you want to monitor.
1. Prediction drift threshold - the drift threshold for each feature you want to monitor.
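Looking back at the prediction response above, here is a minimal, hypothetical sketch of how an application might read those fields once the response has been converted to a plain dict (the field names follow the description above; how you turn the raw protobuf response into a dict depends on your client library version):

```python
# Illustrative only -- assumes the prediction has already been parsed into a dict
# exposing the churned_values / churned_probs fields described above.
def churn_probability(prediction: dict) -> float:
    values = prediction["churned_values"]   # e.g. [0, 1]
    probs = prediction["churned_probs"]     # matching probabilities
    return dict(zip(values, probs))[1]      # probability assigned to churn (value 1)

# churn_probability({"churned_values": [0, 1], "churned_probs": [5e-40, 1.0]}) -> 1.0
```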
###Code
USER_EMAIL = "" # @param {type:"string"}
JOB_NAME = "churn"
# Sampling rate (optional, default=.8)
LOG_SAMPLE_RATE = 0.8 # @param {type:"number"}
# Monitoring Interval in seconds (optional, default=3600).
MONITOR_INTERVAL = 3600 # @param {type:"number"}
# URI to training dataset.
DATASET_BQ_URI = "bq://mco-mm.bqmlga4.train" # @param {type:"string"}
# Prediction target column name in training dataset.
TARGET = "churned"
# Skew and drift thresholds.
SKEW_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
SKEW_CUSTOM_THRESHOLDS = (
"cnt_user_engagement:.5,cnt_level_start_quickplay:.5" # @param {type:"string"}
)
DRIFT_DEFAULT_THRESHOLDS = "country,language" # @param {type:"string"}
DRIFT_CUSTOM_THRESHOLDS = (
"cnt_user_engagement:.5,cnt_level_start_quickplay:.5" # @param {type:"string"}
)
###Output
_____no_output_____
###Markdown
Create your monitoring job
The following code uses the Google Python client library to translate your configuration settings into a programmatic request to start a model monitoring job. Instantiating a monitoring job can take some time. If everything looks good with your request, you'll get a successful API response. Then, you'll need to check your email to receive a notification that the job is running.
###Code
skew_thresholds = get_thresholds(SKEW_DEFAULT_THRESHOLDS, SKEW_CUSTOM_THRESHOLDS)
drift_thresholds = get_thresholds(DRIFT_DEFAULT_THRESHOLDS, DRIFT_CUSTOM_THRESHOLDS)
skew_config = ModelMonitoringObjectiveConfig.TrainingPredictionSkewDetectionConfig(
skew_thresholds=skew_thresholds
)
drift_config = ModelMonitoringObjectiveConfig.PredictionDriftDetectionConfig(
drift_thresholds=drift_thresholds
)
training_dataset = ModelMonitoringObjectiveConfig.TrainingDataset(target_field=TARGET)
training_dataset.bigquery_source = BigQuerySource(input_uri=DATASET_BQ_URI)
objective_config = ModelMonitoringObjectiveConfig(
training_dataset=training_dataset,
training_prediction_skew_detection_config=skew_config,
prediction_drift_detection_config=drift_config,
)
model_ids = get_deployed_model_ids(ENDPOINT_ID)
objective_template = ModelDeploymentMonitoringObjectiveConfig(
objective_config=objective_config
)
objective_configs = set_objectives(model_ids, objective_template)
monitoring_job = create_monitoring_job(objective_configs)
# Run a prediction request to generate schema, if necessary.
try:
_ = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("prediction succeeded")
except Exception:
print("prediction failed")
###Output
_____no_output_____
###Markdown
After a minute or two, you should receive an email at the address you configured above for USER_EMAIL. This email confirms successful deployment of your monitoring job. Here's a sample of what this email might look like:

As your monitoring job collects data, measurements are stored in Google Cloud Storage and you are free to examine your data at any time. The circled path in the image above specifies the location of your measurements in Google Cloud Storage. Run the following cell to take a look at your measurements in Cloud Storage.
###Code
!gsutil ls gs://cloud-ai-platform-fdfb4810-148b-4c86-903c-dbdff879f6e1/*/*
###Output
_____no_output_____
###Markdown
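If you prefer to inspect these artifacts programmatically rather than with gsutil, a minimal sketch using the google-cloud-storage client could look like the following (the bucket name is the one shown in the listing above; the prefix follows the folder layout explained below, and both are placeholders you should adapt):

```python
# Illustrative sketch -- substitute your own monitoring bucket and prefix.
from google.cloud import storage

BUCKET = "cloud-ai-platform-fdfb4810-148b-4c86-903c-dbdff879f6e1"  # from the gsutil listing above
client = storage.Client(project=PROJECT_ID)
for blob in client.list_blobs(BUCKET, prefix="model_monitoring/"):
    print(blob.name, blob.updated)
```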
You will notice the following components in these Cloud Storage paths:
- **cloud-ai-platform-..** - This is a bucket created for you and assigned to capture your service's prediction data. Each monitoring job you create will trigger creation of a new folder in this bucket.
- **[model_monitoring|instance_schemas]/job-..** - This is your unique monitoring job number, which you can see above in both the response to your job creation request and the email notification.
- **instance_schemas/job-../analysis** - This is the monitoring job's understanding and encoding of your training data's schema (field names, types, etc.).
- **instance_schemas/job-../predict** - This is the first prediction made to your model after the current monitoring job was enabled.
- **model_monitoring/job-../serving** - This folder is used to record data relevant to drift calculations. It contains measurement summaries for every hour your model serves traffic.
- **model_monitoring/job-../training** - This folder is used to record data relevant to training-serving skew calculations. It contains an ongoing summary of prediction data relative to training data.

You can create monitoring jobs with other user interfaces
In the previous cells, you created a monitoring job using the Python client library. You can also use the *gcloud* command line tool to create a model monitoring job and, in the near future, you will be able to use the Cloud Console for this function as well.

Generate test data to trigger alerting
Now you are ready to test the monitoring function. Run the following cell, which will generate fabricated test predictions designed to trigger the thresholds you specified above. It takes about five minutes to run this cell and at least an hour to assess and report anomalies in skew or drift, so after running this cell, feel free to proceed with the notebook and you'll see how to examine the resulting alert later.
###Code
def random_uid():
digits = [str(i) for i in range(10)] + ["A", "B", "C", "D", "E", "F"]
return "".join(random.choices(digits, k=32))
def monitoring_test(count, sleep, perturb_num={}, perturb_cat={}):
# Use random sampling and mean/sd with gaussian distribution to model
# training data. Then modify sampling distros for two categorical features
# and mean/sd for two numerical features.
mean_sd = MEAN_SD.copy()
country = COUNTRY.copy()
for k, (mean_fn, sd_fn) in perturb_num.items():
orig_mean, orig_sd = MEAN_SD[k]
mean_sd[k] = (mean_fn(orig_mean), sd_fn(orig_sd))
for k, v in perturb_cat.items():
country[k] = v
for i in range(0, count):
input = DEFAULT_INPUT.copy()
input["user_pseudo_id"] = str(random_uid())
input["country"] = random.choices([*country], list(country.values()))[0]
input["dayofweek"] = random.choices([*DAYOFWEEK], list(DAYOFWEEK.values()))[0]
input["language"] = str(random.choices([*LANGUAGE], list(LANGUAGE.values()))[0])
input["operating_system"] = str(random.choices([*OS], list(OS.values()))[0])
input["month"] = random.choices([*MONTH], list(MONTH.values()))[0]
for key, (mean, sd) in mean_sd.items():
sample_val = round(float(np.random.normal(mean, sd, 1)))
val = max(sample_val, 0)
input[key] = val
print(f"Sending prediction {i}")
try:
send_predict_request(ENDPOINT, input)
except Exception:
print("prediction request failed")
time.sleep(sleep)
print("Test Completed.")
test_time = 300
tests_per_sec = 1
sleep_time = 1 / tests_per_sec
iterations = test_time * tests_per_sec
perturb_num = {"cnt_user_engagement": (lambda x: x * 3, lambda x: x / 3)}
perturb_cat = {"Japan": max(COUNTRY.values()) * 2}
monitoring_test(iterations, sleep_time, perturb_num, perturb_cat)
###Output
_____no_output_____
###Markdown
Interpret your results
While waiting for your results, which, as noted, may take up to an hour, you can read ahead to get a sense of the alerting experience. Here's what a sample email alert looks like... This email is warning you that the *cnt_user_engagement*, *country* and *language* feature values seen in production have skewed above your threshold between training and serving your model. It's also telling you that the *cnt_user_engagement* feature value is drifting significantly over time, again, as per your threshold specification.

Monitoring results in the Cloud Console
You can examine your model monitoring data from the Cloud Console. Below is a screenshot of those capabilities.

Monitoring Status

Monitoring Alerts

Clean up
To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.

Otherwise, you can delete the individual resources you created in this tutorial:
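Before running the cleanup cells below, here is a rough, library-agnostic illustration of the two ideas described above: skew compares the feature distribution seen at serving time against the training distribution, while drift compares serving data against earlier serving data. This is only meant to build intuition and is not the exact statistic the managed service computes.

```python
# Illustrative only -- a simple distance between two categorical distributions.
from collections import Counter

def categorical_distance(sample_a, sample_b):
    """Largest absolute difference between the normalized value frequencies of two samples."""
    counts_a, counts_b = Counter(sample_a), Counter(sample_b)
    total_a, total_b = sum(counts_a.values()), sum(counts_b.values())
    keys = set(counts_a) | set(counts_b)
    return max(abs(counts_a[k] / total_a - counts_b[k] / total_b) for k in keys)

# Skew-style check: training countries vs. countries seen in production.
training_sample = ["United States"] * 90 + ["Japan"] * 10
serving_sample = ["United States"] * 60 + ["Japan"] * 40
print(categorical_distance(training_sample, serving_sample))  # 0.3 -- compare against your threshold
```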
###Code
# Delete endpoint resource (gcloud expects the endpoint ID, not the display name)
!gcloud ai endpoints delete $ENDPOINT_ID --quiet
# Delete model resource (likewise, pass the model ID)
!gcloud ai models delete $MODEL_ID --quiet
###Output
_____no_output_____ |
Case Studies/R/wage_data_analysis/analysis.ipynb | ###Markdown
Wage data analysis
1. Loading prerequisites
1.1 Libraries
###Code
suppressWarnings(
{
if(!require(ISLR)){
install.packages("ISLR")
}
if(!require(GGally)){
install.packages("GGally")
}
if(!require(dplyr)){
install.packages("dplyr")
}
if(!require(ggplot2)){
install.packages("ggplot2")
}
if(!require(caret)){
install.packages("caret")
}
library(ISLR)
library(dplyr)
library(ggplot2)
library(caret)
}
)
options(repr.plot.width=6, repr.plot.height=4)
###Output
Loading required package: ISLR
Loading required package: GGally
Loading required package: ggplot2
Registered S3 method overwritten by 'GGally':
method from
+.gg ggplot2
Loading required package: dplyr
Attaching package: 'dplyr'
The following objects are masked from 'package:stats':
filter, lag
The following objects are masked from 'package:base':
intersect, setdiff, setequal, union
Loading required package: caret
Loading required package: lattice
###Markdown
1.2 Data
###Code
data(Wage)
summary(Wage)
###Output
_____no_output_____
###Markdown
2. Modelling
2.1 Data splicing
###Code
inTrain = createDataPartition(y = Wage$wage, p = 0.75, list = F)
train = Wage[inTrain,]
test = Wage[-inTrain,]
dim(train); dim(test)
names(train)
###Output
_____no_output_____
###Markdown
2.2 Feature plot
###Code
featurePlot(x = train[,c("age","education","jobclass")], y = train$wage, plot = "pairs")
###Output
_____no_output_____
###Markdown
2.3 Wage analysis
2.3.1 Jitter analysis
###Code
g = ggplot(data = train)
###Output
_____no_output_____
###Markdown
2.3.2 jobClass
###Code
g + geom_point(aes(x = age, y = wage, fill = jobclass))
###Output
_____no_output_____
###Markdown
2.3.3 education
###Code
g + geom_point(aes(x = age, y = wage, fill = education)) + geom_smooth(formula = y~x, method = "lm")
###Output
ERROR while rich displaying an object: Error: stat_smooth requires the following missing aesthetics: x and y
Traceback:
1. FUN(X[[i]], ...)
2. tryCatch(withCallingHandlers({
. if (!mime %in% names(repr::mime2repr))
. stop("No repr_* for mimetype ", mime, " in repr::mime2repr")
. rpr <- repr::mime2repr[[mime]](obj)
. if (is.null(rpr))
. return(NULL)
. prepare_content(is.raw(rpr), rpr)
. }, error = error_handler), error = outer_handler)
3. tryCatchList(expr, classes, parentenv, handlers)
4. tryCatchOne(expr, names, parentenv, handlers[[1L]])
5. doTryCatch(return(expr), name, parentenv, handler)
6. withCallingHandlers({
. if (!mime %in% names(repr::mime2repr))
. stop("No repr_* for mimetype ", mime, " in repr::mime2repr")
. rpr <- repr::mime2repr[[mime]](obj)
. if (is.null(rpr))
. return(NULL)
. prepare_content(is.raw(rpr), rpr)
. }, error = error_handler)
7. repr::mime2repr[[mime]](obj)
8. repr_text.default(obj)
9. paste(capture.output(print(obj)), collapse = "\n")
10. capture.output(print(obj))
11. evalVis(expr)
12. withVisible(eval(expr, pf))
13. eval(expr, pf)
14. eval(expr, pf)
15. print(obj)
16. print.ggplot(obj)
17. ggplot_build(x)
18. ggplot_build.ggplot(x)
19. by_layer(function(l, d) l$compute_statistic(d, layout))
20. f(l = layers[[i]], d = data[[i]])
21. l$compute_statistic(d, layout)
22. f(..., self = self)
23. self$stat$compute_layer(data, params, layout)
24. f(..., self = self)
25. check_required_aesthetics(self$required_aes, c(names(data), names(params)),
. snake_class(self))
26. abort(glue("{name} requires the following missing aesthetics: ",
. glue_collapse(lapply(missing_aes, glue_collapse, sep = ", ",
. last = " and "), sep = " or ")))
27. signal_abort(cnd)
###Markdown
2.3.4 binning wage
Creating discrete factors from the wage parameter for analysis
###Code
library(Hmisc)      # provides cut2(); install.packages("Hmisc") if it is not installed
library(gridExtra)  # provides grid.arrange(); install.packages("gridExtra") if needed
cutWage = cut2(train$wage, g = 3)
table(cutWage)
p1 = g + geom_boxplot(aes(x = cutWage, y = age, fill = cutWage))
p2 = g + geom_boxplot(aes(x = cutWage, y = age, fill = cutWage)) + geom_jitter(aes(x = cutWage, y = age))
grid.arrange(p1, p2, ncol = 2)
table(cutWage, train$jobclass)
###Output
_____no_output_____
###Markdown
2.3.5 jobClass wage density plot
###Code
g + geom_density(aes(x = wage, fill = education))
###Output
_____no_output_____ |
Pandas/DataFrames Part-1 in PANDAS.ipynb | ###Markdown
DATAFRAMES
They are the main tools for using pandas

Introduction to Pandas
In this section of the course we will learn how to use pandas for data analysis. You can think of pandas as an extremely powerful version of Excel, with a lot more features. In this section of the course, you should go through the notebooks in this order:
* Introduction to Pandas
* Series
* DataFrames
* Missing Data
* GroupBy
* Merging, Joining, and Concatenating
* Operations
* Data Input and Output
###Code
import numpy as np
import pandas as pd
from numpy.random import randn
from numpy.random import randint
np.random.seed(101) # For getting same random numbers
###Output
_____no_output_____
###Markdown
Using randint in the Dataframe
###Code
randint(0,20,20).reshape(5,4)
###Output
_____no_output_____
###Markdown
A nice DataFrame for the above array with 'randint'
A,B,C,D,E are for the rows and W,X,Y,Z for the columns
###Code
pd.DataFrame(randint(0,20,20).reshape(5,4),['A','B','C','D','E'],['W','X','Y','Z'])
###Output
_____no_output_____
###Markdown
Using randn in the Dataframe
###Code
randn(5,4)
###Output
_____no_output_____
###Markdown
A nice DataFrame for the above array with 'randn'
A,B,C,D,E are for the rows and W,X,Y,Z for the columns
###Code
FRAME = pd.DataFrame(randn(5,4),['A','B','C','D','E'],['W','X','Y','Z'])
FRAME
###Output
_____no_output_____
###Markdown
GRABBING COLUMNS FROM THE DATAFRAMES
###Code
print(f"{FRAME['W']}\n") # for getting the row from "W" row
print(f"{FRAME['X']}\n") # for getting the row from "X" row
print(f"{FRAME['Y']}\n") # for getting the row from "Y" row
print(f"{FRAME['Z']}\n") # for getting the row from "Z" row
FRAME[['W','Y']] # Grabbing multiple rows
###Output
_____no_output_____
###Markdown
METHOD 1 OF GRABBING THE DATA
recommended method
###Code
print(f"{FRAME.W}\n") # for getting the row from "W" row
print(f"{FRAME.X}\n") # for getting the row from "X" row
print(f"{FRAME.Y}\n") # for getting the row from "Y" row
print(f"{FRAME.Z}\n") # for getting the row from "Z" row
###Output
A 0.000323
B -2.711192
C 1.131029
D 1.852155
E 1.652070
Name: W, dtype: float64
A -1.049912
B -1.522410
C 0.498600
D -0.892172
E 0.196925
Name: X, dtype: float64
A -0.229937
B -1.416207
C 1.179022
D 0.616861
E 0.103214
Name: Y, dtype: float64
A -1.283599
B 1.108544
C 1.322684
D -1.121822
E -0.147370
Name: Z, dtype: float64
###Markdown
METHOD 2 OF GRABBING THE DATA
Making a NEW COLUMN
###Code
# Making of a new column in "FRAME" as 'new'
FRAME['new'] = FRAME['W'] + FRAME['X']
FRAME
df = pd.DataFrame(randint(0,20,20).reshape(5,4),['A','B','C','D','E'],['W','X','Y','Z'])
df # for further use
#Adding a new column "addition" as the row-wise sum of the W, X, Y and Z columns
df['addition'] = df['W'] + df['X'] + df['Y'] + df['Z']
df
###Output
_____no_output_____
###Markdown
Dropping a Column
###Code
FRAME
df
FRAME.drop('new', axis=1, inplace=False) # dropping the 'new' column from FRAME
# Note that this doesn't change the original FRAME
# Here inplace is set to False, which is the default
FRAME # nothing has changed in the original FRAME
# Here inplace is set to true and done in the main 'FRAME' too
FRAME.drop('new', axis=1, inplace=True) # dropping column
FRAME
###Output
_____no_output_____
###Markdown
Dropping a Row
###Code
# axis=0 refers to rows (the default); 'E' is a row label
FRAME.drop('E', axis=0)
FRAME # not changed in the main FRAME
# Dropping a row in the main FRAME too
FRAME.drop('E', axis=0, inplace=True)
FRAME
# Note that rows represent at axis=0, columns at axis=1
FRAME.shape
###Output
_____no_output_____
###Markdown
This will tell the shape of a dataframe
GRABBING ROWS FROM THE DATAFRAMES
###Code
df
###Output
_____no_output_____
###Markdown
MEATHOD 1
###Code
ROW_A = df.loc['A'] # row "A"
ROW_B = df.loc['B'] # row "B"
ROW_C = df.loc['C'] # row "C"
ROW_D = df.loc['D'] # row "D"
ROW_E = df.loc['E'] # row "E"
print(f'\n Row A: \n\n {ROW_A}')
print(f'\n Row B: \n\n {ROW_B}')
print(f'\n Row C: \n\n {ROW_C}')
print(f'\n Row D: \n\n {ROW_D}')
print(f'\n Row E: \n\n {ROW_E}')
###Output
Row A:
W 17
X 11
Y 4
Z 7
addition 39
Name: A, dtype: int32
Row B:
W 11
X 5
Y 17
Z 18
addition 51
Name: B, dtype: int32
Row C:
W 13
X 6
Y 3
Z 14
addition 36
Name: C, dtype: int32
Row D:
W 17
X 13
Y 15
Z 17
addition 62
Name: D, dtype: int32
Row E:
W 0
X 15
Y 18
Z 11
addition 44
Name: E, dtype: int32
###Markdown
METHOD 2
BY PASSING THE INDEX OF THE ROW YOU WANT
###Code
row_A = df.iloc[0] # Here it is 'A'
row_B = df.iloc[1] # Here it is 'B'
row_C = df.iloc[2] # Here it is 'C'
row_D = df.iloc[3] # Here it is 'D'
row_E = df.iloc[4] # Here it is 'E'
print(f'\n Row A: \n\n {row_A}')
print(f'\n Row B: \n\n {row_B}')
print(f'\n Row C: \n\n {row_C}')
print(f'\n Row D: \n\n {row_D}')
print(f'\n Row E: \n\n {row_E}')
###Output
Row A:
W 17
X 11
Y 4
Z 7
addition 39
Name: A, dtype: int32
Row B:
W 11
X 5
Y 17
Z 18
addition 51
Name: B, dtype: int32
Row C:
W 13
X 6
Y 3
Z 14
addition 36
Name: C, dtype: int32
Row D:
W 17
X 13
Y 15
Z 17
addition 62
Name: D, dtype: int32
Row E:
W 0
X 15
Y 18
Z 11
addition 44
Name: E, dtype: int32
###Markdown
Grabbing information from the DATASET
###Code
df
# row - B, col - Y
df.loc['B','Y']
df.loc[['A','C'], ['W','Y']] # Multiple data from the dataset
###Output
_____no_output_____ |
DataExploration.ipynb | ###Markdown
DICOM sample visualization
###Code
import matplotlib.pyplot as plt
import pydicom
from pydicom.data import get_testdata_files
print(__doc__)
# filename = get_testdata_files('CT_small.dcm')[0]
filename = "dataset/Image-11.dcm"
dataset = pydicom.dcmread(filename)
# Normal mode:
print()
print("Filename.........:", filename)
print("Storage type.....:", dataset.SOPClassUID)
print()
pat_name = dataset.PatientName
display_name = pat_name.family_name + ", " + pat_name.given_name
print("Patient's name...:", display_name)
print("Patient id.......:", dataset.PatientID)
print("Modality.........:", dataset.Modality)
# print("Study Date.......:", dataset.StudyDate)
if 'PixelData' in dataset:
rows = int(dataset.Rows)
cols = int(dataset.Columns)
print("Image size.......: {rows:d} x {cols:d}, {size:d} bytes".format(
rows=rows, cols=cols, size=len(dataset.PixelData)))
if 'PixelSpacing' in dataset:
print("Pixel spacing....:", dataset.PixelSpacing)
# use .get() if not sure the item exists, and want a default value if missing
print("Slice location...:", dataset.get('SliceLocation', "(missing)"))
# plot the image using matplotlib
plt.imshow(dataset.pixel_array, cmap=plt.cm.bone)
plt.show()
###Output
Automatically created module for IPython interactive environment
Filename.........: dataset/Image-11.dcm
Storage type.....: 1.2.840.10008.5.1.4.1.1.4
Patient's name...: 00009,
Patient id.......: 00009
Modality.........: MR
Image size.......: 512 x 512, 524288 bytes
Pixel spacing....: [0.468800008296967, 0.468800008296967]
Slice location...: 30.56045723
###Markdown
Papers

**[ 1 ] MRI-Based Deep-Learning Method for Determining Glioma MGMT Promoter Methylation Status**
* uses intensity normalization with ANTs

**[ 2 ] The RSNA-ASNR-MICCAI BraTS 2021 Benchmark on Brain Tumor Segmentation and Radiogenomic Classification**

**[ 3 ] Automatic Prediction of MGMT Status in Glioblastoma via Deep Learning-Based MR Image Analysis**
* uses intensity normalization
* data augmentation (image rotations)
* performance of segmentation task evaluated with DICE Score

Intensity normalization for MR images:
* https://github.com/ANTsX/ANTs - **Advanced Normalization Tools**
* https://github.com/jcreinhold/intensity-normalization - **Intensity Normalization Package** (see *Recommendation on where to start*)

Motivation for Intensity Normalization:
Intensity normalization is an important pre-processing step in many image processing applications regarding MR images since MR images have an inconsistent intensity scale across (and within) sites and scanners due to, e.g., the use of different equipment, different pulse sequences and scan parameters, and a different environment in which the machine is located. Importantly, the inconsistency in intensities isn't a feature of the data (unless you want to classify the scanner/site from which an image came); it's an artifact of the acquisition process. The inconsistency causes a problem with machine learning-based image processing methods, which usually assume the data was gathered iid from some distribution.

Experiments for MONAI dataset generation
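As a concrete illustration of the simplest flavor of intensity normalization (a sketch only; this is not what ANTs or the intensity-normalization package do internally), one could z-score each volume using statistics computed over its nonzero voxels:

```python
# Minimal sketch: z-score a NIfTI volume over its nonzero voxels (background excluded).
import numpy as np
import nibabel as nib

def zscore_normalize(nifti_path, out_path):
    img = nib.load(nifti_path)
    data = img.get_fdata()
    brain = data[data > 0]                      # crude brain mask: nonzero voxels
    normalized = (data - brain.mean()) / brain.std()
    nib.save(nib.Nifti1Image(normalized.astype(np.float32), img.affine), out_path)

# e.g. zscore_normalize("BraTS2021_00000_t1.nii.gz", "BraTS2021_00000_t1_zscore.nii.gz")
```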
###Code
import os
import pprint as pp
modalities = ["t1ce", "t1", "t2", "flair"]
def get_datalist_dict(path):
datalist = []
for i, (dirpath, dirnames, filenames) in enumerate(os.walk(path)):
# What a mess because of not adding this S$%^
temp_dict=dict()
temp_dict["image"] = []
# skip .DS_Store
if i <= 2:
continue
for modality in modalities:
temp_path = f"{dirpath}/{os.path.split(dirpath)[1]}"
temp_dict["image"].append(f"{temp_path}_{modality}.nii.gz")
temp_dict["label"] = f"{dirpath}_seg.nii.gz"
datalist.append(temp_dict)
# if i == 5:
# break
del temp_dict
return datalist
path = "/home/advo/dev/kaggle/RSNA-MICCAI-2021/dataset/task1/BraTS2021_Training_Data/"
dl = get_datalist_dict(path)
len(dl)
pp.pprint(dl[123])
for i, (dirpath, dirnames, filenames) in enumerate(os.walk(path)):
if i < 1:
continue
if i == 4:
break
print(f"i={i}")
print(f"dirpath = {dirpath}/{os.path.split(dirpath)[1]}")
# print(f"{os.path.split(dirpath)[1]}")
# print(f"dirnames = {dirnames}")
# print(f"filenames = {filenames}")
###Output
i=1
dirpath = /home/advo/dev/kaggle/RSNA-MICCAI-2021/dataset/task1/BraTS2021_Training_Data/BraTS2021_01247/BraTS2021_01247
i=2
dirpath = /home/advo/dev/kaggle/RSNA-MICCAI-2021/dataset/task1/BraTS2021_Training_Data/BraTS2021_01255/BraTS2021_01255
i=3
dirpath = /home/advo/dev/kaggle/RSNA-MICCAI-2021/dataset/task1/BraTS2021_Training_Data/BraTS2021_00406/BraTS2021_00406
###Markdown
NIFTI experiments
###Code
%pip install nibabel
import nibabel as nib
import matplotlib.pyplot as plt
import random
%matplotlib inline
image_path = '/home/advo/dev/kaggle/RSNA-MICCAI-2021/dataset/task1/BraTS2021_Training_Data/BraTS2021_00000/BraTS2021_00000_t1ce.nii.gz'
t1ce_img = nib.load(image_path)
image_path = '/home/advo/dev/kaggle/RSNA-MICCAI-2021/dataset/task1/BraTS2021_Training_Data/BraTS2021_00000/BraTS2021_00000_t1.nii.gz'
t1_img = nib.load(image_path)
image_path = '/home/advo/dev/kaggle/RSNA-MICCAI-2021/dataset/task1/BraTS2021_Training_Data/BraTS2021_00000/BraTS2021_00000_t2.nii.gz'
t2_img = nib.load(image_path)
image_path = '/home/advo/dev/kaggle/RSNA-MICCAI-2021/dataset/task1/BraTS2021_Training_Data/BraTS2021_00000/BraTS2021_00000_flair.nii.gz'
flair_img = nib.load(image_path)
image_path = '/home/advo/dev/kaggle/RSNA-MICCAI-2021/dataset/task1/BraTS2021_Training_Data/BraTS2021_00000/BraTS2021_00000_seg.nii.gz'
seg_img = nib.load(image_path)
t1_hdr = t1_img.header
# print(t1_hdr)
t1_data = t1_img.get_fdata()
t1ce_data = t1ce_img.get_fdata()
t2_data = t2_img.get_fdata()
flair_data = flair_img.get_fdata()
seg_data = seg_img.get_fdata()
all_modalities = [t1_data, t1ce_data, t2_data, flair_data, seg_data]
mod_names = ["t1_data", "t1ce_data", "t2_data", "flair_data", "seg_data"]
SLICE_x = random.randrange(start=0, stop=t1_data.shape[0])
SLICE_y = random.randrange(start=0, stop=t1_data.shape[1])
SLICE_z = random.randrange(start=0, stop=t1_data.shape[2])
x_slice = list()
y_slice = list()
z_slice = list()
for modality in all_modalities:
x_slice.append(modality[SLICE_x, :, :])
y_slice.append(modality[:, SLICE_y, :])
z_slice.append(modality[:, :, SLICE_z])
slices = [x_slice, y_slice, z_slice]
for j, modality in enumerate(all_modalities):
fig, axes = plt.subplots(1, 3, figsize=(15,15))
plt.title(mod_names[j])
axes[0].imshow(x_slice[j].T, cmap="gray", origin="lower")
axes[1].imshow(y_slice[j].T, cmap="gray", origin="lower")
axes[2].imshow(z_slice[j].T, cmap="gray", origin="lower")
###Output
_____no_output_____
###Markdown
Compare Ngrams
###Code
# Get Bigrams for each class
pos_bigram_counts = count_ngrams(pos['post'])
neg_bigram_counts = count_ngrams(neg['post'])
# sort counts
pos_bigram_counts = sort_ngram_count_results(pos_bigram_counts,n=200).iloc[25:]
neg_bigram_counts = sort_ngram_count_results(neg_bigram_counts,n=200).iloc[25:]
ngram_comparisson = compare_counts([pos_bigram_counts,neg_bigram_counts])
ngram_comparisson.head(25).iloc[1:]
plt.figure(figsize=(8,6),dpi=100)
plt.hist(pos.user_freq, label='positive', alpha=0.7)
plt.hist(neg.user_freq, label='negative', alpha =0.7)
plt.legend()
plt.ylabel('# Users with post total')
plt.xlabel('Post total')
plt.title('Number of posts by user')
plt.savefig('./plots/user_freq')
# most common subreddits in dataset
Counter(data.subreddit).most_common(20)
user1 = train[(train['label']==1) & (train['user_id']==4132)]['post'].values
user2 = pos[pos.user_id==28983]['post'].values
user1[9]  # sample post from user 4132
user2[4]
###Output
_____no_output_____
###Markdown
Data Exploration and AnalysisI will import data of the three cameras with distinc perspective among them. So, I am going to import each subset and analize its features. In the next cell I am going to import the data given by Udacity mixed with others that I recorded which is contained in [Here]().
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
dataset = pd.read_csv('data/data.csv', usecols=[0, 1, 2, 3])
# visualize the first and last 5 elements of the data.
dataset.head()
dataset.tail()
print(dataset.keys())
###Output
Index(['center', 'left', 'right', 'steering'], dtype='object')
###Markdown
For the purpose of this project, I will focus on the `steering` value recorded for each image. Furthermore, the simulated car has three cameras, each mounted at the front of the car but spaced slightly apart, so each captured image has a different perspective of the road.

Getting more data
The data was obtained by driving in different ways. One recording is a simulation focused on keeping the car in the middle of the lane lines. Another consists of the same method but with the track driven in the opposite direction. Also, there are scenes where the car is drifting off the track and must recover back to the middle. Finally, one part covers just the curves of the track, and that recording was made at a slower speed.

Moreover, the simulation has two different tracks. The first one is a simple circular racetrack. The second one is harder because it is set in a mountainous area, so the road is steeper and winding.
###Code
from urllib.request import urlretrieve
import os
import numpy as np
from sklearn.preprocessing import LabelBinarizer
from zipfile import ZipFile
def download_uncompress(url, file, name):
    # Download the archive if it is not already present, then extract it into `name`.
    if not os.path.isfile(file):
        print("Download file... " + file + " ...")
        urlretrieve(url, file)
        print("File downloaded")
    if os.path.isdir(name):
        print('Data extracted')
    else:
        with ZipFile(file) as zipf:
            zipf.extractall(name)
#the dataset provided by udacity
#download_uncompress('----------','remain_data.zip','remain_data')
###Output
_____no_output_____
###Markdown
Histogram
###Code
# the first data (data.zip) corresponds to one lap on the easiest track, focusing on keeping the car in the middle
#plt.figure(figsize)
#hist1 = df['steering']
dataset['steering'].plot.hist( bins = 20, align='mid', color = 'blue', edgecolor = 'black')
plt.title('Histogram "Middle Steering" Data')
plt.xlabel('steering')
plt.ylabel('counts')
plt.grid(axis='y', alpha=0.5)
plt.savefig('README_IMAGES/histogram1.jpg', transparent= False, bbox_inches='tight', pad_inches=0)
plt.show()
plt.close()
print('size of data:', dataset['steering'].count()*3,' images')
###Output
_____no_output_____
###Markdown
As you can see in the above histograms, there is bias in each one because the steering was zero in most of the images. This is expected since the goal is keeping the car between the lane lines. But this biased distribution leads to a poorly performing CNN. So I am going to reduce the number of zero-steering images in order to get a better-distributed data set.

Reducing Bias and Augmenting Data
Notice that each line of the Pandas frame has three images which correspond to the left, center and right camera on the car.
###Code
dataset = dataset[dataset['steering'] !=0].append(dataset[dataset['steering'] == 0].sample(frac=0.5))
'''I am going to save the max index that has the first dataset since this has the different root of images
respect to the others images dataset'''
#dataset.to_csv(r'./data/remain_data.csv',index= None, header=True)
print('size of total data:', dataset['steering'].count()*3,' images\n')
dataset['steering'].plot.hist( bins = 20, align='mid', color = 'blue', edgecolor = 'black')
plt.title('Histogram "Middle Steering" Data')
plt.xlabel('steering')
plt.ylabel('counts')
plt.grid(axis='y', alpha=0.5)
plt.savefig('README_IMAGES/histogramTotal.jpg', transparent= False, bbox_inches='tight', pad_inches=0)
plt.show()
plt.close()
###Output
_____no_output_____
###Markdown
Now, you can see that the distribution of data is less biased.
###Code
import cv2
#this function flips the image passed as parameter (horizontal flip)
def flip_image(img):
out = cv2.flip(img,1)
return out
#this function changes the brightness of the image passed as parameter
#the V in HSV stands for Value: the perception of the amount of light or power of the source.
def brightness_change(img):
out = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
out = np.array(out, dtype = np.float64)
    out[:,:,2] = np.clip(out[:,:,2]*(.25+np.random.random()), 0, 255)  # clip to avoid uint8 overflow
out = np.uint8(out)
out_f = cv2.cvtColor(out, cv2.COLOR_HSV2RGB)
return out_f
#correct the steering angle for the left and right camera images (offset relative to the center camera)
def steering(position,steering):
if position == 'left':
steering = steering+0.25
else:
steering = steering-0.25
return steering
###Output
_____no_output_____
###Markdown
Visualize data and augmented data
Here, I will show you six images and their corresponding variations.
###Code
from matplotlib.image import imread
# Read data
sample = []
label = []
# get 3 image paths from the first dataset
for i, (index, row) in enumerate(dataset.iterrows()):  # enumerate: row labels may not start at 0 after filtering
    if i == 0:
row[0] = row[0].split('/')[-1]
row[1] = row[1].split('/')[-1]
row[2] = row[2].split('/')[-1]
sample.append(row[0])
label.append(row[3])
sample.append(row[1])
label.append(steering('left',row[3]))
sample.append(row[2])
label.append(steering('right',row[3]))
#save label to brightness image
label.append(row[3])
label.append(steering('left',row[3]))
label.append(steering('right',row[3]))
#save label to flip image
label.append(row[3]*-1)
label.append(steering('left',row[3])*-1)
label.append(steering('right',row[3])*-1)
else:
break
sample_temp = np.array(sample)
sample_temp = np.hstack((sample_temp,sample_temp, sample_temp))
example_test = np.column_stack((sample_temp ,np.array(label)))
print(pd.DataFrame(example_test), '\n')
# get images
name = './data/IMG/'
images_set_orig = []
images_set_flip = []
images_set_brightness = []
con_lbl = 0
for row in sample:
img = imread(name+row)
images_set_orig.append(img)
img2 = brightness_change(img)
images_set_brightness.append(img2)
flip_img = flip_image(img)
images_set_flip.append(flip_img)
print(pd.DataFrame(label))
images_set_orig = np.array(images_set_orig)
images_set_brightness = np.array(images_set_brightness)
images_set_flip = np.array(images_set_flip)
cont1 = 0
cont2 = 0
cont3 = 0
lbl_title = 0
f, ax = plt.subplots(3,3,figsize=(12,8))
f.subplots_adjust(top = 0.99, bottom=0.01, hspace=1.5, wspace=0.4)
f.suptitle('Data Augmentation', fontsize = 16)
f.subplots_adjust(top=0.90)
for row in range(3):
for col in range(3):
if row == 0:
ax[row, col].imshow(images_set_orig[cont1])
cont1 += 1
elif row == 1:
ax[row, col].imshow(images_set_brightness[cont2])
cont2 += 1
else:
ax[row, col].imshow(images_set_flip[cont3])
cont3 += 1
if col == 0:
ax[row,col].set_title('Center camera %.2f' % label[lbl_title])
elif col == 1:
ax[row,col].set_title('Left camera %.2f' % label[lbl_title])
else:
ax[row,col].set_title('Right camera %.2f' % label[lbl_title])
lbl_title += 1
f.savefig('README_IMAGES/DataAug.jpg', transparent= False, bbox_inches='tight', pad_inches=0)
###Output
_____no_output_____
###Markdown
Ibovespa forecasting using neural networks
Machine Learning Engineer Nanodegree - Capstone Proposal
Import python packages
###Code
import os
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from ibovespa.utils import load_config
from ibovespa.data_collection import collect_data
from ibovespa.data_preparation import prepare_data
###Output
_____no_output_____
###Markdown
Load Configurations
###Code
config = load_config()
###Output
_____no_output_____
###Markdown
Data Collection
###Code
period = config["data_collection"]["period"]
stocks = config["data_collection"]["stocks"]
raw_data = collect_data(stocks=stocks, data_size=period)
raw_data.tail()
###Output
_____no_output_____
###Markdown
Data Preparation
###Code
test_split = config["data_preparation"]["split_size"]["test"]
valid_split = config["data_preparation"]["split_size"]["validation"]
clean_data = prepare_data(raw_data, split=test_split, split_valid=valid_split)
###Output
_____no_output_____
###Markdown
Data Exploration
It is important to evaluate the data and get insights only from the train dataset. Otherwise, we will have data leakage even before any model training.
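As a small, generic illustration of this principle (not part of this project's pipeline), any statistic used for preprocessing should be fit on the training split only and then applied unchanged to the other splits; for example, with a standard scaler (assuming the prepared data labels a "test" group the same way it labels "train"):

```python
# Illustrative sketch: fit preprocessing statistics on the train split only.
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
train_scaled = scaler.fit_transform(clean_data.loc[clean_data["group"] == "train", ["IBOV"]])
test_scaled = scaler.transform(clean_data.loc[clean_data["group"] == "test", ["IBOV"]])  # no refitting
```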
###Code
explore_data = clean_data[clean_data["group"]=="train"]
# Calendar Variables
calendar_variables = pd.get_dummies(pd.DatetimeIndex(explore_data['date']).weekday, prefix="weekday")
explore_data = pd.concat([explore_data, calendar_variables], axis = 1)
explore_data[["weekday"]] = pd.DatetimeIndex(explore_data['date']).weekday
numeric_columns = ['IBOV', 'ITUB4', 'BBDC4', 'VALE3', 'PETR4', 'PETR3', 'ABEV3', 'BBAS3', 'B3SA3', 'ITSA4']
stocks_diff = explore_data[numeric_columns].pct_change().reset_index(drop=True)
stocks_diff.columns = ["diff_" + column for column in stocks_diff.columns]
complete_explore_data = pd.concat([explore_data, stocks_diff], axis=1).iloc[1:].reset_index(drop=True)
complete_explore_data.head()
###Output
_____no_output_____
###Markdown
Weekday Boxplots
###Code
f, ax = plt.subplots(figsize=(13.7, 5.5))
sns.boxplot(y="weekday", x="diff_IBOV", data=complete_explore_data, orient="h", ax=ax)
sns.swarmplot(x="diff_IBOV", y="weekday", orient="h", data=complete_explore_data, color=".25", ax=ax)
plt.axvline(0, 0,1, ls="--", color="gray")
sns.displot(y="weekday", x="diff_IBOV", data=complete_explore_data)
###Output
_____no_output_____
###Markdown
Correlations
###Code
last_day_diff = complete_explore_data.iloc[:-1][['diff_ITUB4', 'diff_BBDC4', 'diff_VALE3', 'diff_PETR4', 'diff_PETR3',
'diff_ABEV3', 'diff_BBAS3', 'diff_B3SA3', 'diff_ITSA4']].reset_index(drop=True)
today_diff_close = complete_explore_data.iloc[1:][["diff_IBOV"]].reset_index(drop=True)
diff_evaluation = pd.concat([today_diff_close, last_day_diff], axis=1)
diff_evaluation.corr(method="spearman")
sns.pairplot(diff_evaluation, kind='reg', plot_kws={'line_kws':{'color':'red'}, 'scatter_kws': {'alpha': 0.1}})
###Output
_____no_output_____
###Markdown
**The notebook contains preprocessing of the data and visualization of a few features of the dataset**
###Code
# Import the libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
#import and read dataset
dataset = pd.read_csv("credit_card_defaults.csv")
# recap of data
dataset.head()
#Stats of features/variables in dataset
dataset.info()
# stats of numerical variables
dataset.describe().transpose()
#Check for null values
dataset.isnull().sum()
# Analyze the variables/features
dataset["LIMIT_BAL"].describe()
dataset["SEX"].value_counts()
dataset["EDUCATION"].value_counts()
dataset["MARRIAGE"].value_counts()
dataset["AGE"].describe()
# Analyze the repayment history
pay0 = dataset["PAY_0"].value_counts()
pay2 = dataset["PAY_2"].value_counts()
pay3 = dataset["PAY_3"].value_counts()
pay4 = dataset["PAY_4"].value_counts()
pay5 = dataset["PAY_5"].value_counts()
pay6 = dataset["PAY_6"].value_counts()
print("PAY0: \n",pay0,"\nPAY2: \n",pay2,"\nPAY3: \n",pay3,"\nPAY4: \n",pay4,"\nPAY5: \n",pay5,"\nPAY6: \n",pay6)
###Output
PAY0:
0 14737
-1 5686
1 3688
-2 2759
2 2667
3 322
4 76
5 26
8 19
6 11
7 9
Name: PAY_0, dtype: int64
PAY2:
0 15730
-1 6050
2 3927
-2 3782
3 326
4 99
1 28
5 25
7 20
6 12
8 1
Name: PAY_2, dtype: int64
PAY3:
0 15764
-1 5938
-2 4085
2 3819
3 240
4 76
7 27
6 23
5 21
1 4
8 3
Name: PAY_3, dtype: int64
PAY4:
0 16455
-1 5687
-2 4348
2 3159
3 180
4 69
7 58
5 35
6 5
8 2
1 2
Name: PAY_4, dtype: int64
PAY5:
0 16947
-1 5539
-2 4546
2 2626
3 178
4 84
7 58
5 17
6 4
8 1
Name: PAY_5, dtype: int64
PAY6:
0 16286
-1 5740
-2 4895
2 2766
3 184
4 49
7 46
6 19
5 13
8 2
Name: PAY_6, dtype: int64
###Markdown
**Analysis of Variables using Visualization**
###Code
#Distribution of balance limit
dataset["LIMIT_BAL"].plot(kind='hist', color='green', bins=60)
dataset.groupby('default payment next month').size().plot(kind='bar', color='blue')
plt.xlabel('default payment next month')
plt.ylabel('count')
dataset['SEX'].value_counts().plot(kind='bar', color='orange')
plt.xlabel("Gender")
plt.ylabel("Count")
# Count of Marriage levels
#By matplotlib
#dataset['MARRIAGE'].value_counts().plot(kind='bar', color='red')
#dataset.groupby('MARRIAGE').size().plot(kind='bar',color='purple')
#by Seaborn
sns.countplot(x='MARRIAGE', data=dataset, palette='BuGn_r')
#default payment vs limit balance wrt gender
sns.barplot(x='default payment next month', y='LIMIT_BAL',hue='SEX',data=dataset,palette='Blues')
#Count of Education levels wrt deault payments
sns.countplot(x='EDUCATION', hue='default payment next month', data=dataset, palette='hls')
# marriage level vs default payment
sns.countplot(x='MARRIAGE', hue='default payment next month', data=dataset, palette='gist_rainbow_r')
###Output
_____no_output_____
###Markdown
Let's explore, now that we have a Spotify class
###Code
from urllib.parse import urlencode
spotify = SpotifyAPI(client_id, client_secret)
spotify.perform_auth()
spotify.access_token
#spotify search
headers = {
'Authorization': f'Bearer {spotify.access_token}'
}
endpoint = 'https://api.spotify.com/v1/search'
data = urlencode({'q': 'Time', 'type': 'track'})
lookup_url = f'{endpoint}?{data}'
r = requests.get(lookup_url, headers=headers)
print(r.status_code)
the_villain_full = spotify.search('MF DOOM')
the_villain = spotify.get_resource('2pAWfrd7WFF3XhVt9GooDL',resource_type='artists')
the_villain['popularity']
###Output
_____no_output_____
###Markdown
Or, don't reinvent the wheel and use an existing wrapper
###Code
!pip install tekore
import tekore as tk
cred = tk.Credentials(client_id,client_secret)
app_token = cred.request_client_token()
spotify = tk.Spotify(app_token)
spotify.album('<album-id>')  # tekore's album() requires a Spotify album ID; replace the placeholder
###Output
_____no_output_____
###Markdown
Final Project - Beast Team.

Idea : Pandemic has driven people more towards the suburbs causing a surge in home prices. Naturally, this implies a new supply is going to come about (we've seen that in 2008). In this project, we create two TS-ML models - one that uses historical home prices to predict supply and one that uses historical supply & current interest rates to predict home prices. We will then utilize the former to predict the supply for major cities; we will then feed those predictions into the latter to predict home prices under various interest regimes for the same major cities.

$$s_{jt}=f(X_j,X_t,s_{jt-1},...,s_{jt-k},p_{jt-1},...,p_{jt-k})+\epsilon_{jt}$$

$s_{jt}$ is Supply for county $j$ at time $t$.
$X_j$ are demographic features for county $j$, example: population of a county, income per capita.
$X_t$ are time specific features, example: Summer fixed effect [indicator variable to represent summer and capture the regime difference of increased supply] etc.
$p_{jt}$ is home value index for county $j$ at time $t$. We will only consider single family homes for this analysis. [Add a note on calculation of home value index]

$$p_{jt}=g(s_{jt},s_{jt-1},...,s_{jt-k},r_t,X_j,X_t,p_{jt-1},...,p_{jt-k})+\nu_{jt}$$

$r_t$ is the interest rate at time $t$.
Here we have considered even the historical supply and historical prices since at a previous point, there may have been over supply of homes.

Goal: Estimation of $\hat{f}$ and $\hat{g}$ using ML.

Data cleaning and preparation steps.
1) Combine multiple census data in one file. [ETA - 06/19 - EOD PDT] - Each county may have data retrieved for census at different points, we will take the latest data for modeling.
2) Get two letter code mapping for states. [Done]
3) Filter out only 4 weeks data from home_data. [Done]
4) Split region data in home_data and join on two letter data, state-county. [Done]
5) Get month from 4 week data and combine with interest rate data. [Done]

TODO : Visualizations to build.
1) Geographically increase in prices and supply side by side heat map.
2) Scatter plot of 2020 latest price on Y, and
 - Population
 - Income/Poverty
 - Education
   - Percent of adults with less than a high school diploma, 2015-19
   - Percent of adults with a high school diploma only, 2015-19
   - Percent of adults completing some college or associate's degree, 2015-19
   - Percent of adults with a bachelor's degree or higher, 2015-19

TODO : Basic modelling.
1) Select demographic features using LASSO, Median home prices 2017 vs RHS [All demographics]
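Before the data preparation below, here is a minimal sketch (column names taken from the combined frame built later; treat the helper itself as hypothetical) of how the lagged supply and price terms $s_{jt-1},\dots,s_{jt-k}$ and $p_{jt-1},\dots,p_{jt-k}$ from the equations above could be constructed with pandas, by grouping on county and shifting within each group:

```python
# Illustrative sketch -- assumes a long-format frame with one row per (state, county, month).
import pandas as pd

def add_lags(df, cols=("months_of_supply", "median_sale_price"), k=3):
    df = df.sort_values(["state_code", "county_name", "period_month_year"]).copy()
    for col in cols:
        for lag in range(1, k + 1):
            df[f"{col}_lag{lag}"] = df.groupby(["state_code", "county_name"])[col].shift(lag)
    return df
```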
###Code
import pandas as pd
data_folder = "C:\\Users\\spars\\Documents\\Master\\JHU\TML\\HomePriceBeastNew\\"
raw_home_data = pd.read_csv(f"{data_folder}weekly_housing_market_data_most_recent.tsv", delimiter="\t")
merged_census_county_data = pd.read_csv(f"{data_folder}merged_census_county_data.csv", low_memory=False)
interest_data = pd.read_csv(f"{data_folder}fed_funds.csv")
interest_data["DATE"] = pd.to_datetime(interest_data["DATE"])
state_mapping_data = pd.read_csv(f"{data_folder}lettercodestatemapping.csv")
state_mapping_data = state_mapping_data[['State', 'Code']]
def combine_datasets(merged_census_county_data, home_data, interest_data):
merged_home_data = pd.merge(
home_data,
interest_data,
how="inner",
left_on="period_month_year",
right_on="DATE",
right_index=False)
merged_home_data = pd.merge(
merged_home_data,
merged_census_county_data,
how="inner",
right_index=False)
return merged_home_data
def clean_home_data(raw_home_data):
home_data = raw_home_data
home_data.drop(home_data[home_data["region_type"]!='county'].index, inplace = True)
home_data["county_name"] = home_data["region_name"].apply(lambda x: x.split(',')[0])
home_data["state_code"] = home_data["region_name"].apply(lambda x: x.split(', ')[1])
home_data['period_begin'] = pd.to_datetime(home_data['period_begin'])
home_data['period_end'] = pd.to_datetime(home_data['period_end'])
home_data['period_diff'] = home_data['period_end'] - home_data['period_begin']
home_data['period_diff'] = home_data['period_diff'].apply(lambda x : x.days)
home_data.drop(home_data[home_data['period_diff']!=27].index, inplace = True)
home_data["period_month_year"] = pd.to_datetime( home_data["period_begin"].dt.year.astype(str) + '-' + home_data["period_begin"].dt.month.astype(str) + '-1')
return home_data
home_data = clean_home_data(raw_home_data)
combined_home_data = combine_datasets(merged_census_county_data,
home_data,
interest_data)
ignore_cols = ['Unnamed: 165', 'duration', 'last_updated',
'region_type', 'region_name', 'region_type_id',
'period_diff', 'DATE']
combined_home_data = combined_home_data[[x for x in \
combined_home_data.columns \
if x not in ignore_cols]]
combined_home_data.to_csv(f"{data_folder}combined_home_data.csv", index=False)
for x in combined_home_data.columns:
print(x)
###Output
region_id
period_begin
period_end
total_homes_sold
total_homes_sold_yoy
average_homes_sold
average_homes_sold_yoy
total_homes_sold_with_price_drops
total_homes_sold_with_price_drops_yoy
average_homes_sold_with_price_drops
average_homes_sold_with_price_drops_yoy
percent_homes_sold_with_price_drops
percent_homes_sold_with_price_drops_yoy
median_sale_price
median_sale_price_yoy
median_sale_ppsf
median_sale_ppsf_yoy
median_days_to_close
median_days_to_close_yoy
price_drops
price_drops_yoy
percent_active_listings_with_price_drops
percent_active_listings_with_price_drops_yoy
pending_sales
pending_sales_yoy
median_pending_sqft
median_pending_sqft_yoy
off_market_in_two_weeks
off_market_in_two_weeks_yoy
off_market_in_one_week
off_market_in_one_week_yoy
percent_off_market_in_two_weeks
percent_off_market_in_two_weeks_yoy
percent_off_market_in_one_week
percent_off_market_in_one_week_yoy
total_new_listings
total_new_listings_yoy
average_new_listings
average_new_listings_yoy
median_new_listing_price
median_new_listing_price_yoy
median_new_listing_ppsf
median_new_listing_ppsf_yoy
inventory
inventory_yoy
total_active_listings
total_active_listings_yoy
active_listings
active_listings_yoy
age_of_inventory
age_of_inventory_yoy
homes_delisted
homes_delisted_yoy
percent_active_listings_delisted
percent_active_listings_delisted_yoy
median_active_list_price
median_active_list_price_yoy
median_active_list_ppsf
median_active_list_ppsf_yoy
average_of_median_list_price_amount
average_of_median_list_price_amount_yoy
average_of_median_offer_price_amount
average_of_median_offer_price_amount_yoy
avg_offer_to_list
avg_offer_to_list_yoy
average_sale_to_list_ratio
average_sale_to_list_ratio_yoy
median_days_on_market
median_days_on_market_yoy
pending_sales_to_sales_ratio
pending_sales_to_sales_ratio_yoy
months_of_supply
months_of_supply_yoy
average_pending_sales_listing_updates
average_pending_sales_listing_updates_yoy
percent_total_price_drops_of_inventory
percent_total_price_drops_of_inventory_yoy
percent_homes_sold_above_list
percent_homes_sold_above_list_yoy
price_drop_percent_of_old_list_price
price_drop_percent_of_old_list_price_yoy
county_name
state_code
period_month_year
FEDFUNDS
FIPS_Code
Economic_typology_2015
CENSUS_2010_POP
ESTIMATES_BASE_2010
POP_ESTIMATE_2010
POP_ESTIMATE_2011
POP_ESTIMATE_2012
POP_ESTIMATE_2013
POP_ESTIMATE_2014
POP_ESTIMATE_2015
POP_ESTIMATE_2016
POP_ESTIMATE_2017
POP_ESTIMATE_2018
POP_ESTIMATE_2019
N_POP_CHG_2010
N_POP_CHG_2011
N_POP_CHG_2012
N_POP_CHG_2013
N_POP_CHG_2014
N_POP_CHG_2015
N_POP_CHG_2016
N_POP_CHG_2017
N_POP_CHG_2018
N_POP_CHG_2019
Births_2010
Births_2011
Births_2012
Births_2013
Births_2014
Births_2015
Births_2016
Births_2017
Births_2018
Births_2019
Deaths_2010
Deaths_2011
Deaths_2012
Deaths_2013
Deaths_2014
Deaths_2015
Deaths_2016
Deaths_2017
Deaths_2018
Deaths_2019
NATURAL_INC_2010
NATURAL_INC_2011
NATURAL_INC_2012
NATURAL_INC_2013
NATURAL_INC_2014
NATURAL_INC_2015
NATURAL_INC_2016
NATURAL_INC_2017
NATURAL_INC_2018
NATURAL_INC_2019
INTERNATIONAL_MIG_2010
INTERNATIONAL_MIG_2011
INTERNATIONAL_MIG_2012
INTERNATIONAL_MIG_2013
INTERNATIONAL_MIG_2014
INTERNATIONAL_MIG_2015
INTERNATIONAL_MIG_2016
INTERNATIONAL_MIG_2017
INTERNATIONAL_MIG_2018
INTERNATIONAL_MIG_2019
DOMESTIC_MIG_2010
DOMESTIC_MIG_2011
DOMESTIC_MIG_2012
DOMESTIC_MIG_2013
DOMESTIC_MIG_2014
DOMESTIC_MIG_2015
DOMESTIC_MIG_2016
DOMESTIC_MIG_2017
DOMESTIC_MIG_2018
DOMESTIC_MIG_2019
NET_MIG_2010
NET_MIG_2011
NET_MIG_2012
NET_MIG_2013
NET_MIG_2014
NET_MIG_2015
NET_MIG_2016
NET_MIG_2017
NET_MIG_2018
NET_MIG_2019
RESIDUAL_2010
RESIDUAL_2011
RESIDUAL_2012
RESIDUAL_2013
RESIDUAL_2014
RESIDUAL_2015
RESIDUAL_2016
RESIDUAL_2017
RESIDUAL_2018
RESIDUAL_2019
GQ_ESTIMATES_BASE_2010
GQ_ESTIMATES_2010
GQ_ESTIMATES_2011
GQ_ESTIMATES_2012
GQ_ESTIMATES_2013
GQ_ESTIMATES_2014
GQ_ESTIMATES_2015
GQ_ESTIMATES_2016
GQ_ESTIMATES_2017
GQ_ESTIMATES_2018
GQ_ESTIMATES_2019
R_birth_2011
R_birth_2012
R_birth_2013
R_birth_2014
R_birth_2015
R_birth_2016
R_birth_2017
R_birth_2018
R_birth_2019
R_death_2011
R_death_2012
R_death_2013
R_death_2014
R_death_2015
R_death_2016
R_death_2017
R_death_2018
R_death_2019
R_NATURAL_INC_2011
R_NATURAL_INC_2012
R_NATURAL_INC_2013
R_NATURAL_INC_2014
R_NATURAL_INC_2015
R_NATURAL_INC_2016
R_NATURAL_INC_2017
R_NATURAL_INC_2018
R_NATURAL_INC_2019
R_INTERNATIONAL_MIG_2011
R_INTERNATIONAL_MIG_2012
R_INTERNATIONAL_MIG_2013
R_INTERNATIONAL_MIG_2014
R_INTERNATIONAL_MIG_2015
R_INTERNATIONAL_MIG_2016
R_INTERNATIONAL_MIG_2017
R_INTERNATIONAL_MIG_2018
R_INTERNATIONAL_MIG_2019
R_DOMESTIC_MIG_2011
R_DOMESTIC_MIG_2012
R_DOMESTIC_MIG_2013
R_DOMESTIC_MIG_2014
R_DOMESTIC_MIG_2015
R_DOMESTIC_MIG_2016
R_DOMESTIC_MIG_2017
R_DOMESTIC_MIG_2018
R_DOMESTIC_MIG_2019
R_NET_MIG_2011
R_NET_MIG_2012
R_NET_MIG_2013
R_NET_MIG_2014
R_NET_MIG_2015
R_NET_MIG_2016
R_NET_MIG_2017
R_NET_MIG_2018
R_NET_MIG_2019
LT_HSD_1970
HSD_Only_1970
COLL_1TO3_1970
COLL_4_1970
PCT_LT_HSD_1970
PCT_HSD_Only_1970
PCT_COLL_1TO3_1970
PCT_COLL_4_1970
LT_HSD_1980
HSD_Only_1980
COLL_1TO3_1980
COLL_4_1980
PCT_LT_HSD_1980
PCT_HSD_Only_1980
PCT_COLL_1TO3_1980
PCT_COLL_4_1980
LT_HSD_1990
HSD_Only_1990
COLL_1TO3_1990
COLL_4_1990
PCT_LT_HSD_1990
PCT_HSD_Only_1990
PCT_COLL_1TO3_1990
PCT_COLL_4_1990
LT_HSD_2000
HSD_Only_2000
COLL_1TO3_2000
COLL_4_2000
PCT_LT_HSD_2000
PCT_HSD_Only_2000
PCT_COLL_1TO3_2000
PCT_COLL_4_2000
LT_HSD_2015_19
HSD_Only_2015_19
COLL_1TO3_2015_19
COLL_4_2015_19
PCT_LT_HSD_2015_19
PCT_HSD_Only_2015_19
PCT_COLL_1TO3_2015_19
PCT_COLL_4_2015_19
Civilian_labor_force_2000
Employed_2000
Unemployed_2000
Unemployment_rate_2000
Civilian_labor_force_2001
Employed_2001
Unemployed_2001
Unemployment_rate_2001
Civilian_labor_force_2002
Employed_2002
Unemployed_2002
Unemployment_rate_2002
Civilian_labor_force_2003
Employed_2003
Unemployed_2003
Unemployment_rate_2003
Civilian_labor_force_2004
Employed_2004
Unemployed_2004
Unemployment_rate_2004
Civilian_labor_force_2005
Employed_2005
Unemployed_2005
Unemployment_rate_2005
Civilian_labor_force_2006
Employed_2006
Unemployed_2006
Unemployment_rate_2006
Civilian_labor_force_2007
Employed_2007
Unemployed_2007
Unemployment_rate_2007
Civilian_labor_force_2008
Employed_2008
Unemployed_2008
Unemployment_rate_2008
Civilian_labor_force_2009
Employed_2009
Unemployed_2009
Unemployment_rate_2009
Civilian_labor_force_2010
Employed_2010
Unemployed_2010
Unemployment_rate_2010
Civilian_labor_force_2011
Employed_2011
Unemployed_2011
Unemployment_rate_2011
Civilian_labor_force_2012
Employed_2012
Unemployed_2012
Unemployment_rate_2012
Civilian_labor_force_2013
Employed_2013
Unemployed_2013
Unemployment_rate_2013
Civilian_labor_force_2014
Employed_2014
Unemployed_2014
Unemployment_rate_2014
Civilian_labor_force_2015
Employed_2015
Unemployed_2015
Unemployment_rate_2015
Civilian_labor_force_2016
Employed_2016
Unemployed_2016
Unemployment_rate_2016
Civilian_labor_force_2017
Employed_2017
Unemployed_2017
Unemployment_rate_2017
Civilian_labor_force_2018
Employed_2018
Unemployed_2018
Unemployment_rate_2018
Civilian_labor_force_2019
Employed_2019
Unemployed_2019
Unemployment_rate_2019
Civilian_labor_force_2020
Employed_2020
Unemployed_2020
Unemployment_rate_2020
Median_Household_Income_2019
Med_HH_Income_Percent_of_State_Total_2019
###Markdown
Exploring SigMorphon 2019 Task 2 Datasets
###Code
from itertools import chain
import os
import pandas as pd
###Output
_____no_output_____
###Markdown
Required functions and classes
###Code
class Sentence(object):
"""Sentence class with surface words, lemmas and morphological tags
"""
def __init__(self, conll_sentence):
"""Create a Sentence object from a conll sentence
Arguments:
conll_sentence: (list) list of conll lines correspond to one sentence
"""
self.surface_words = []
self.lemmas = []
self.morph_tags = []
for conll_token in conll_sentence:
if not conll_token or conll_token.startswith('#'):
continue
_splits = conll_token.split('\t')
self.surface_words.append(_splits[1])
self.lemmas.append(_splits[2])
self.morph_tags.append(_splits[5].split(';'))
def get_tags_as_str(self):
return [';'.join(morph_tags) for morph_tags in self.morph_tags]
def __repr__(self):
return "\n".join(
['Surface: {}, Lemma: {}, MorphTags: {}'.format(surface, lemma, ';'.join(morph_tags))
for surface, lemma, morph_tags in zip(self.surface_words, self.lemmas, self.morph_tags)]
)
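# Quick sanity check with a single hypothetical CoNLL-U token line (made up for
# illustration): columns are tab-separated, index 1 is the surface form, index 2 the
# lemma and index 5 the ';'-joined morphological tags parsed by Sentence above.
_example_line = "1\tköpeklerin\tköpek\tNOUN\t_\tN;GEN;PL\t2\tnmod\t_\t_"
print(Sentence([_example_line]))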
def read_dataset(conll_file):
"""Read Conll dataset
    Arguments:
conll_file: (str) conll file path
Returns:
list: list of `Sentence` objects
"""
sentences = []
with open(conll_file, 'r', encoding='UTF-8') as f:
conll_sentence = []
for line in f:
if len(line.strip())==0:
if len(conll_sentence) > 0:
sentence = Sentence(conll_sentence)
sentences.append(sentence)
conll_sentence = []
else:
conll_sentence.append(line)
return sentences
def get_stats(sentences):
"""Calculate statistics of surface words, lemmas and morphological tags in given sentences
Arguments:
sentences: (list) list of `Sentence` objects
Returns:
dict: stats dict
"""
def flatten(_list):
return list(chain(*_list))
number_of_sentences = len(sentences)
number_of_tokens = len(flatten([sentence.surface_words for sentence in sentences]))
number_of_unique_words = len(set(flatten([sentence.surface_words for sentence in sentences])))
number_of_unique_lemmas = len(set(flatten([sentence.lemmas for sentence in sentences])))
number_of_unique_tags = len(set(flatten([sentence.get_tags_as_str() for sentence in sentences])))
number_of_unique_features = len(set(flatten(flatten([sentence.morph_tags for sentence in sentences]))))
return {
'Number of sentence': number_of_sentences,
'Number of tokens': number_of_tokens,
'Number of unique words': number_of_unique_words,
'Number of unique lemmas': number_of_unique_lemmas,
'Number of unique morphological tags': number_of_unique_tags,
'Number of unique morphological features': number_of_unique_features
}
###Output
_____no_output_____
###Markdown
Create datasets and stats for each language
###Code
language_paths = ['data/2019/task2/' + filename for filename in os.listdir('data/2019/task2/')]
language_names = [filename.replace('UD_', '') for filename in os.listdir('data/2019/task2/')]
datasets_train = {}
datasets_val = {}
dataset_stats = {}
for language_path, language_name in zip(language_paths, language_names):
language_conll_files = os.listdir(language_path)
    assert len(language_conll_files) == 2, 'Expected exactly 2 conll files per language'
for language_conll_file in language_conll_files:
if 'train' in language_conll_file:
datasets_train[language_name] = read_dataset(language_path + '/' + language_conll_file)
dataset_stats[language_name] = get_stats(datasets_train[language_name])
else:
datasets_val[language_name] = read_dataset(language_path + '/' + language_conll_file)
###Output
_____no_output_____
###Markdown
Data set sizes
###Code
data_sizes = []
for language_name, stats in dataset_stats.items():
row = {'Language': language_name, 'Size': stats['Number of tokens']}
data_sizes.append(row)
data_sizes_df = pd.DataFrame(data_sizes)
data_sizes_df = data_sizes_df.set_index('Language')
step = 8
for i in range(0, len(data_sizes_df), step):
    data_sizes_df[i:i+step].plot.bar(figsize=(20,5))
###Output
_____no_output_____
###Markdown
Unique Surface words vs Unique Lemmas- All values are normalized by dataset sizes
###Code
surface_lemma_stats_list = []
for language_name, stats in dataset_stats.items():
row = {'Language': language_name,
'# unique surface words': stats['Number of unique words'] / stats['Number of tokens'],
'# unique lemmas': stats['Number of unique lemmas'] / stats['Number of tokens']
}
surface_lemma_stats_list.append(row)
surface_lemma_stats_df = pd.DataFrame(surface_lemma_stats_list)
surface_lemma_stats_df = surface_lemma_stats_df.set_index('Language')
step = 10
for i in range(0, len(surface_lemma_stats_df), step):
    surface_lemma_stats_df[i:i+step].plot.bar(figsize=(20,5))
###Output
_____no_output_____
###Markdown
Unique morphological features vs Unique morphological tags- All values are normalized by dataset sizes
###Code
morph_stats_list = []
for language_name, stats in dataset_stats.items():
row = {'Language': language_name,
'# unique morphological features': stats['Number of unique morphological features'] / stats['Number of tokens'],
'# unique morphological tags': stats['Number of unique morphological tags'] / stats['Number of tokens']
}
morph_stats_list.append(row)
morph_stats_df = pd.DataFrame(morph_stats_list)
morph_stats_df = morph_stats_df.set_index('Language')
step = 10
for i in range(0, len(morph_stats_df), step):
    morph_stats_df[i:i+step].plot.bar(figsize=(20,5))
###Output
_____no_output_____
###Markdown
Predicting yield at University of California schoolsIn our project, we wanted to work with admission data from undergraduate institutions to learn more about the admission process in a more scientific context.**Our main modelling goal for this project will be to determine the yield at an undergraduate school given information about the admitted class.** We believe it is a very interesting and practical question. Every year, during the admission season, colleges have to select students for the incoming freshmen year, but do not know how many of their offers will be accepted. If too few students accept their offers, the freshmen class will be under-enrolled, and school's resources will not be fully used. However, if too many students are admitted, the school will need to spend more resources to accommodate the unusually high number of students. Unfortunately, **admission data is legally protected, and only highly anonymized datasets are publicly available.** For this project, we decided to use the data from the University of California infocenter. The particular datasets we were interested in can be found here: https://www.universityofcalifornia.edu/infocenter/admissions-source-school. The data contains information about: - The number of applying, admitted and accepted students from each high school - The average GPA of applying, admitted and accepted students at each high school - Demographic data (students' race/ethnicity) - Locations of the high schools The data is sorted by year and University of California campus.We believe that the predictive power of these datasets might not be enough to accurately predict the yield (it only gives us access to very basic meta-information). Therefore, if the evaluations of our models show poor results, we are planning to use demographic information about the surveyed high schools/counties. To do that, we will most likely use the https://data.ca.gov/ repository. First look at our dataOur data is split into two datasets. The first one (which we will call `gpas` in the later parts of this notebook) contains mean GPA information by: - University of California campus - High School - Year - Category (applied, admitted, enrolled) Whereas the second set (which we will call `counts`) contains the number of students in each of the categories *(applied, admitted, enrolled)*. The data is also grouped by: - University of California campus - High School - Year
###Code
import pandas as pd
%matplotlib inline
import pylab as plt
import numpy as np
import scipy as sc
import scipy.stats
gpas = pd.read_csv('data/FR_GPA_by_Inst_data_converted.csv')
counts = pd.read_csv('data/HS_by_Year_data_converted.csv')
###Output
_____no_output_____
###Markdown
After we have loaded our data, we will display the first few rows in each dataset.
###Code
gpas.head(12)
counts.head(6)
###Output
_____no_output_____
###Markdown
About the structure of the dataUnfortunately, the datasets were given to us in a fairly uncomfortable format. Each of the rows specifies: - Name of the high school - City of the high school - County/State/Territory of the high school - University of California campus - Year. However, instead of specifying the numerical data in designated columns, the datasets use the *measure name/measure value* approach. That means that **only one numerical value is given per row.** Instead of putting multiple measurements per each row, the datasets' designers decided to create multiple copies of each row with one measurement per copy. The `Measure Names` column is used to indicate the type of the measurement in the row. The `Measure Values` column specifies the actual value of the measurement.For example, a row of type:| campus_name | school_name | avg_enrolled_gpa | avg_accepted_gpa | enrolled_student_count | accepted_student_count ||-------------|-------------|------------------|------------------|------------------------|------------------------|| Campus A | School B | 2.0 | 3.0 | 50 | 80 |Would be converted to multiple rows like:| campus_name | school_name | measurement name | measurement value ||-------------|-------------|------------------------|-------------------|| Campus A | School B | avg_enrolled_gpa | 2.0 || Campus A | School B | avg_accepted_gpa | 3.0 || Campus A | School B | enrolled_student_count | 50 || Campus A | School B | accepted_student_count | 80 |Moreover, these rows have been split into two separate files, which further complicates working with the data. We expect that we will need to put significant effort into the data cleaning part of the project. Data explorationIn order to better understand the data we will be working with, we decided to perform a few data exploration tasks. Ratio of NaN fieldsOne of the concerning properties of our datasets was the large number of `NaN` fields. In order to anonymize the data, the University of California decided to remove information about GPAs for high schools with fewer than 3 student datapoints, and count information for high schools with fewer than 5 datapoints.In this exercise, we decided to find out the ratio of `NaN` fields to actual fields.
###Code
gpas_row_count = len(gpas)
gpas_not_nan_count = gpas[~gpas['Measure Values'].isnull()]['Measure Values'].count()
gpas_nan_ratio = gpas_not_nan_count/gpas_row_count
print('Number of rows in the GPA table: ', gpas_row_count)
print('Number of valid GPA values: ', gpas_not_nan_count)
print('Ratio of valid GPA values to all values: ', gpas_nan_ratio)
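# A minimal sketch (not the final cleaning step) of how the long measure-name/measure-value
# format described above can be pivoted back into one column per measurement; the real
# preprocessing in preprocessing.ipynb would also keep campus and year information.
gpas_wide = gpas.pivot_table(index='Calculation1', columns='Measure Names',
                             values='Measure Values', aggfunc='mean')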
###Output
Number of rows in the GPA table: 888066
Number of valid GPA values: 570305
Ratio of valid GPA values to all values: 0.6421876301986564
###Markdown
Next, we repeat the same process for the `student count` data:
###Code
student_num_row_count = len(counts)
student_num_not_nan_count = counts[~counts['Measure Values'].isnull()]['Measure Values'].count()
student_num_nan_ratio = student_num_not_nan_count/student_num_row_count
print('Number of rows in the student count table: ', student_num_row_count)
print('Number of valid student count values: ', student_num_not_nan_count)
print('Ratio of valid student count values to all values: ', student_num_nan_ratio)
###Output
Number of rows in the student count table: 1048575
Number of valid student count values: 737957
Ratio of valid student count values to all values: 0.7037713086808287
###Markdown
ResultsAs we can see, a large number of rows in our dataset **do not contain valid data.** We will have to properly deal with this problem while working on our data cleaning component. High school applicant GPAsWe thought it would be interesting to learn which schools in our datasets sent the most qualified candidates as measured by student GPA. In order to find that information, we decided to sort the schools by their mean applicant GPA.First we will show the best schools by applicant GPA:
###Code
school_gpas = gpas[gpas['Measure Names'] == 'App GPA'].\
groupby('Calculation1')['Measure Values'].\
mean()
school_gpas.sort_values(ascending=[False])[0:10]
###Output
_____no_output_____
###Markdown
Next we will look at the schools with lowest GPAs:
###Code
school_gpas.sort_values(ascending=[True])[0:10]
###Output
_____no_output_____
###Markdown
Interestingly, **all of these schools were located in California**. This brings us to another interesting question about our dataset composition. High school location breakdownIn our previous exercise we noticed that the top 10 "best" schools and top 10 "worst" schools in our dataset were located in California. In this section, we would like to learn how many of the considered schools were located: - in California - in the US but outside California - outside of the US In order to perform this task, we rely on the following conjecture about the format of the `County/State/Territory` column in the `counts` dataset: - If the school is located in California, the column contains the county name - If the school is located in the US, the column contains the name of the state - If the school is located outside of the US, the column contains the name of the country (in all caps)First we will validate our data:
###Code
# We extracted the list of California counties and US territories from the list of unique locations
ca_counties = ['Alameda', 'Alpine', 'Amador', 'Butte', 'Calaveras', 'Colusa', 'Contra Costa', 'Del Norte', 'El Dorado', 'Fresno', 'Glenn', 'Humboldt', 'Imperial', 'Inyo', 'Kern', 'Kings', 'Lake', 'Lassen', 'Los Angeles', 'Madera', 'Marin', 'Mariposa', 'Mendocino', 'Merced', 'Modoc', 'Mono', 'Monterey', 'Napa', 'Nevada', 'Orange', 'Placer', 'Plumas', 'Riverside', 'Sacramento', 'San Benito', 'San Bernardino', 'San Diego', 'San Francisco', 'San Joaquin', 'San Luis Obispo', 'San Mateo', 'Santa Barbara', 'Santa Clara', 'Santa Cruz', 'Shasta', 'Sierra', 'Siskiyou', 'Solano', 'Sonoma', 'Stanislaus', 'Sutter', 'Tehama', 'Trinity', 'Tulare', 'Tuolumne', 'Ventura', 'Yolo', 'Yuba']
us_states_and_territories = ['American Samoa', 'Northern Mariana Islands', 'U.S. Armed Forces –\xa0Pacific', 'U.S. Armed Forces –\xa0Europe', 'Puerto Rico', 'Guam', 'District of Columbia', 'Alabama', 'Alaska', 'Arizona', 'Arkansas', 'California', 'Colorado', 'Connecticut', 'Delaware', 'Florida', 'Georgia', 'Hawaii', 'Idaho', 'Illinois', 'Indiana', 'Iowa', 'Kansas', 'Kentucky', 'Louisiana', 'Maine', 'Maryland', 'Massachusetts', 'Michigan', 'Minnesota', 'Mississippi', 'Missouri', 'Montana', 'Nebraska', 'Nevada', 'New Hampshire', 'New Jersey', 'New Mexico', 'New York', 'North Carolina', 'North Dakota', 'Ohio', 'Oklahoma', 'Oregon', 'Pennsylvania', 'Rhode Island', 'South Carolina', 'South Dakota', 'Tennessee', 'Texas', 'Utah', 'Vermont', 'Virginia', 'Washington', 'West Virginia', 'Wisconsin', 'Wyoming']
all_locations = list(counts['County/State/ Territory'].unique())
country_names = [l for l in all_locations
if l not in ca_counties and
l not in us_states_and_territories and
l is not np.nan]
# Sanity check - our country_names should be in all caps:
for country_name in country_names:
assert(country_name == country_name.upper())
###Output
_____no_output_____
###Markdown
Next we will perform the actual calculations:
###Code
total_schools = counts['Calculation1'].unique().size
california_schools = counts[counts['County/State/ Territory'].isin(ca_counties)]\
['Calculation1'].unique().size
us_non_ca_schools = counts[counts['County/State/ Territory'].isin(us_states_and_territories)]\
['Calculation1'].unique().size
foreign_schools = counts[counts['County/State/ Territory'].isin(country_names)]\
['Calculation1'].unique().size
print('Total number of schools: ', total_schools)
print('Ratio of schools in california: ', california_schools/total_schools)
print('Ratio of schools in the US (but not CA): ', us_non_ca_schools/total_schools)
print('Ratio of foreign schools: ', foreign_schools/total_schools)
###Output
Total number of schools: 3077
Ratio of schools in california: 0.31036724081897954
Ratio of schools in the US (but not CA): 0.512187195320117
Ratio of foreign schools: 0.17679558011049723
###Markdown
Raw data summaryTo summarize, we believe our data contains very interesting information that could be helpful to predict the student yield ratio. However, due to a peculiar format of the data, we will need to put a large amount of work into data cleanup and preprocessing. We will move on to that task in our `preprocessing.ipynb` notebook. Visualizations on the preprocessed dataTo show the type of information stored in our dataset, we decided to show it on a variety of different graphs.
###Code
packed = pd.read_csv('data/processed.csv')
###Output
_____no_output_____
###Markdown
Applying vs Admitted vs Enrolled GPAWe wanted to see what the differences between applying, admitted, and enrolled students' GPAs are. In order to do that, we used our `*_num` and `*_gpa` columns to properly compute the average GPA of students at the UC universities.Unsurprisingly, the applying student pool had the lowest mean GPA. Moreover, the enrolled student pool had lower GPAs than admitted students. This makes sense, since the students from the top of the accepted pool are more likely to get offers from other universities.
###Code
def avg_gpa_finder(data):
d = {}
d['adm_gpa'] = (data['adm_gpa'] * data['adm_num']).sum() / (data[data['adm_gpa'].notnull()]['adm_num'].sum())
d['app_gpa'] = (data['app_gpa'] * data['app_num']).sum() / (data[data['app_gpa'].notnull()]['app_num'].sum())
d['enr_gpa'] = (data['enr_gpa'] * data['enr_num']).sum() / (data[data['enr_gpa'].notnull()]['enr_num'].sum())
return pd.Series(d, index=['adm_gpa', 'app_gpa', 'enr_gpa'])
packed.groupby(['campus']).apply(avg_gpa_finder).plot.bar()
###Output
_____no_output_____
###Markdown
Average Admitted GPA Inflation over the yearsWe are interested in exploring how the average admitted, enrolled and applied GPAs have changed over the years. The line plots show that the average GPA tends to increase until about 2007, drops suddenly afterwards, and resumes its upward trend after 2010. So, during recent years, GPAs have been getting inflated again. This suggests to us that, in order to predict the ratio between the applicants and the students who were actually enrolled, we might need to look at data from recent years.
###Code
packed.groupby(['year']).apply(avg_gpa_finder).plot.line()
###Output
_____no_output_____
###Markdown
Admitted Students vs Enrolled Students The goal of this project is to predict the ratio between the enrolled students and the admitted students in the future. Therefore, a scatterplot of enrolled versus admitted students from the past gives us an indication of how our model needs to be built. The data regarding "Universitywide" is excluded from this plot because we are interested in each individual university.The ratio of enrolled to admitted could be a good metric for the desirability of a campus. For instance, Berkeley and Santa Barbara admitted a similar number of students, but many more students enrolled at Berkeley, indicating that Berkeley could be more desirable for students.
###Code
def adm_enr_num(data):
d = {}
d['adm_num'] = data['adm_num'].sum()
d['enr_num'] = data['enr_num'].sum()
return pd.Series(d, index=['adm_num', 'enr_num'])
enr_adm_num_c = packed[packed['campus'] != 'Universitywide'].groupby(['campus']).apply(adm_enr_num)
x, y = enr_adm_num_c.adm_num, enr_adm_num_c.enr_num  # Universitywide data is already excluded above
campus_names = ['Berkeley', 'Irvine', 'Davis', 'Los Angeles', 'Merced', 'Riverside', 'San Diego',
'Santa Barbara', 'Santa Cruz']
campus_names.sort()
plt.scatter(x, y)
plt.xlabel('admitted')
plt.ylabel('enrolled')
plt.title('Number enrolled vs admitted by UC campus')
for i in range(len(campus_names)):
    plt.annotate(campus_names[i], (x.iloc[i], y.iloc[i]))
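# Follow-up sketch: the enrolled-to-admitted ratio itself - the yield this project
# ultimately wants to predict - computed per campus from the aggregates above.
yield_ratio = (enr_adm_num_c['enr_num'] / enr_adm_num_c['adm_num']).sort_values(ascending=False)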
###Output
_____no_output_____
###Markdown
Explore Kaggle Goodreads-books dataset1. Read the dataset into a dataframe and print its shape2. Check for invalid values in the dataset3. Know the data types of variables4. Describe the data5. Make Histograms and Box-Plots and look for outliers
###Code
#All imports
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
from sklearn import preprocessing
#Load the csv file as Pandas dataframe and check its shape
#Note the warning below: The data contains few erroneous rows that have extra values in its 11th column;
#read_csv function skips these erroneous cases from our dataframe df.
#The original csv file
df = pd.read_csv("books.csv", error_bad_lines = False)
print("The data contains {0} Rows and {1} Columns".format(df.shape[0],df.shape[1]))
###Output
The data contains 13714 Rows and 10 Columns
###Markdown
a. Peek into first 5 rows and the column names of the dataframe
###Code
#Let's look at the first 5 rows of the data
#We do see the 10 column names and clearly J.K. Rowling's Harry Potter books...yaay :-)!
df.head()
# print column names
print("Column names: {0}".format(list(df.columns)))
###Output
Column names: ['bookID', 'title', 'authors', 'average_rating', 'isbn', 'isbn13', 'language_code', '# num_pages', 'ratings_count', 'text_reviews_count']
###Markdown
b. There are no invalid entries in the data. It is already clean.
###Code
#Check if the data has any null values
df.isnull().values.any()
###Output
_____no_output_____
###Markdown
c. Explore the variables to understand the data better.
###Code
#Get column information
df.info()
"""
From the column information above, we see that the following variables are numerical in nature,
i.e. these have either int64 or float64 types.
________________________________________________
| |
Variable # | Variable Name | Variable Type
________________________________________________
1 bookID int64
4 average_rating float64
6 isbn13 int64
7 # num_pages int64
8 ratings_count int64
9 text_reviews_count int64
Next we will peek into the counts, mean, std, range and percentiles of all the continuous variables in the data.
"""
continuousVars = ['bookID', 'average_rating', 'isbn13','# num_pages','ratings_count', 'text_reviews_count']
df[continuousVars].describe()
###Output
_____no_output_____
###Markdown
d. Exploring the continuous variables in data 1. Build histograms to take a peek at the counts.
###Code
fig = plt.figure(figsize = (20,25))
ax = fig.gca()
df[continuousVars].hist(ax = ax)
plt.show()
###Output
c:\anaconda\Anaconda3\lib\site-packages\ipykernel_launcher.py:3: UserWarning: To output multiple subplots, the figure containing the passed axes is being cleared
This is separate from the ipykernel package so we can avoid doing imports until
###Markdown
2. Drawing a Normal curve on the histograms could help understand the distribution type
###Code
#Think it would be nice to draw a normal curve on the histograms to check out the skewness of the data distribution.
# The function below fits a normal distribution to the data
# It is named PlotHistogramsWithNormalCurve and the following are the parameters,
# dfCol - The univariate column for which to plot the histogram (pandas vector)
# varName - Column name to print on titles (string)
# bins - Preferred bins in the histogram (default = 20)
# color - Preferred color of histogram (default is blue)
def PlotHistogramsWithNormalCurve(dfCol, varName, bins=20, color='b'):
dMean, dStd = norm.fit(dfCol)
plt.figure(figsize = (8, 8))
# Plot hist
plt.hist(dfCol, bins, density=True, alpha=0.6, color=color)
# Plot PDF.
xmin, xmax = plt.xlim()
xlin = np.linspace(xmin, xmax, 100)
pdf = norm.pdf(xlin, dMean, dStd)
plt.plot(xlin, pdf, 'k', linewidth=2)
title = "Fit results for [" + varName + "]: Mean = %.4f, Std. Dev, = %.4f" % (dMean, dStd)
plt.title(title)
plt.show()
###Output
_____no_output_____
###Markdown
'# num_pages' has a right-skewed distribution
###Code
PlotHistogramsWithNormalCurve(df['# num_pages'], "# num_pages")
###Output
_____no_output_____
###Markdown
'average_rating' is Normally distributed
###Code
PlotHistogramsWithNormalCurve(df['average_rating'], "average_rating")
###Output
_____no_output_____
###Markdown
'bookID' is Uniformly distributed??
###Code
PlotHistogramsWithNormalCurve(df['bookID'], "bookID")
###Output
_____no_output_____
###Markdown
'isbn13' shows not much of a distribution, mostly falls into one bin
###Code
PlotHistogramsWithNormalCurve(df['isbn13'], "isbn13")
###Output
_____no_output_____
###Markdown
'ratings_count' is right skewed - but could there be extreme values in the distribution exaggerating the skew?
###Code
PlotHistogramsWithNormalCurve(df['ratings_count'], "ratings_count")
###Output
_____no_output_____
###Markdown
'text_reviews_count' is right skewed - but could there be extreme values in the distribution exaggerating the skew?
###Code
#Possibly left skewed but looks like there are some extreme values in the distribution
PlotHistogramsWithNormalCurve(df['text_reviews_count'], "text_reviews_count")
###Output
_____no_output_____
###Markdown
3. Box-plots could help detect variables with outliers Looks like there are a couple of outlier values of 4 and 5 million ratings_count that stretch the y-axis scale of the box plots
###Code
plt.figure(figsize = (10, 10))
df.boxplot(column= ['# num_pages', 'average_rating', 'ratings_count', 'text_reviews_count'])
plt.show()
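# A small numerical sketch of the same observation using the usual 1.5*IQR rule,
# here applied to ratings_count only.
q1, q3 = df['ratings_count'].quantile([0.25, 0.75])
iqr = q3 - q1
ratings_outlier_count = (df['ratings_count'] > q3 + 1.5 * iqr).sum()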
###Output
_____no_output_____
###Markdown
The extreme ratings_count values dominate the scale, so we first filter them out; ultimately we need to normalize this data to see all variables on the same scale
###Code
df2 = df[(df['ratings_count'] < 1000)]
plt.figure(figsize = (10, 10))
df2.boxplot(column= ['# num_pages', 'average_rating', 'ratings_count', 'text_reviews_count'])
plt.show()
###Output
_____no_output_____
###Markdown
The normalized box-plot puts all our variables on the same scale and also shows many values lying outside the Inter Quartile Range (IQR) whiskers, beyond the plotted min and max values
###Code
# Create varsToNormalize, where all the varsToNormalize values are treated as floats
varsToNormalize = df[['# num_pages', 'average_rating', 'ratings_count', 'text_reviews_count']].values.astype(float)
# Create a minimum and maximum preprocessing object
range_Scaler = preprocessing.MinMaxScaler()
# Create an object to transform the data to fit minmax processor
vars_Scaled = range_Scaler.fit_transform(varsToNormalize)
# Run the normalizer on the dataframe
df_normalized = pd.DataFrame(vars_Scaled)
plt.figure(figsize = (10, 10))
df_normalized.boxplot()
plt.show()
###Output
_____no_output_____
###Markdown
e. Let's check out the categorical variables in the data
###Code
categoricalVars = ['title', 'authors', 'isbn', 'language_code']
df[categoricalVars].describe()
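# A quick follow-up sketch: the most frequent values of two of the categorical columns,
# e.g. which languages and authors dominate the catalogue.
top_languages = df['language_code'].value_counts().head()
top_authors = df['authors'].value_counts().head()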
###Output
_____no_output_____
###Markdown
Sum of Targets
###Code
ax = train_targets_scored.sum(axis = 0).iloc[1:].plot.bar(figsize=(20,10), title = "Sum of Targets")
for i, t in enumerate(ax.get_xticklabels()):
if (i % 5) != 0:
t.set_visible(False)
ax.set_ylabel("Sum")
ax.tick_params(axis='y', labelsize=14)
ax.tick_params(axis='x', labelsize=14)
plt.savefig('Sum_Of_Targets.pdf')
###Output
_____no_output_____
###Markdown
General Distribution
###Code
Genes = train_features.iloc[:,4:775] #Only Gene Values
Cells = train_features.iloc[:,776:876] #Only Cell Values
GeneSum = np.sum(Genes, axis = 1)#Engineer features
CellSum = np.sum(Cells, axis = 1)
Genemax = np.max(Genes, axis = 1)
Cellmax = np.max(Cells, axis = 1)
Genemin = np.min(Genes, axis = 1)
Cellmin = np.min(Cells, axis = 1)
Genemean = np.mean(Genes, axis = 1)
Cellmean = np.average(Cells, axis = 1)
Genestd = np.std(Genes, axis = 1)
Cellstd = np.std(Cells, axis = 1)
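# Sketch (assumption: these summaries are candidate engineered features, not part of the
# original data): collect the per-sample statistics into a single DataFrame.
engineered_features = pd.DataFrame({
    'gene_sum': GeneSum, 'gene_mean': Genemean, 'gene_min': Genemin,
    'gene_max': Genemax, 'gene_std': Genestd,
    'cell_sum': CellSum, 'cell_mean': Cellmean, 'cell_min': Cellmin,
    'cell_max': Cellmax, 'cell_std': Cellstd})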
Cellstd = pd.DataFrame(data=Cellstd, columns=['Cell Value'])
Cellstd.plot.hist( bins = 30,title = "Plot of Std of Cells")
plt.savefig('Plot of Std of Cells.pdf')
Cellmean = pd.DataFrame(data=Cellmean, columns=['Cell Value'])
Cellmean.plot.hist(bins = 30, title = "Plot of Mean of Cells")
plt.savefig('Plot of Mean of Cells.pdf')
Cellmin = pd.DataFrame(data=Cellmin, columns=['Cell Value'])
Cellmin.plot.hist( bins = 30,title = "Histogram Plot of Min of Cells")
plt.savefig('Plot of Min of Cells.pdf')
Cellmax = pd.DataFrame(data=Cellmax, columns=['Cell Value'])
Cellmax.plot.hist( bins = 30,title = "Histogram Plot of Max of Cells")
plt.savefig('Plot of Max of Cells.pdf')
CellSum = pd.DataFrame(data=CellSum, columns=['Cell Value'])
CellSum.plot.hist( bins = 30,title = "Histogram Plot of Sum of Cells")
plt.savefig('Plot of Sum of Cells.pdf')
GeneSum = pd.DataFrame(data=GeneSum, columns=['Gene Value'])
GeneSum.plot.hist( bins = 30,title = "Histogram Plot of Sum of Genes")
plt.savefig('Plot of Sum of Genes.pdf')
Genemean = pd.DataFrame(data=Genemean, columns=['Gene Value'])
Genemean.plot.hist( bins = 30,title = "Histogram Plot of Mean of Genes")
plt.savefig('Plot of Mean of Genes.pdf')
Genemin = pd.DataFrame(data=Genemin, columns=['Gene Value'])
Genemin.plot.hist( bins = 30,title = "Histogram Plot of Min of Genes")
plt.savefig('Plot of Min of Genes.pdf')
Genemax = pd.DataFrame(data=Genemax, columns=['Gene Value'])
Genemax.plot.hist( bins = 30,title = "Histogram Plot of Max of Genes")
plt.savefig('Plot of Max of Genes.pdf')
Cellmean = np.average(Cells, axis = 1)
for i in range(0,len(Cellmean)): #find ids with low means so they can be visualized in the next steps
if Cellmean[i] < -9.8:
print(i)
###Output
1021
7034
9588
10399
18553
20508
###Markdown
Individual Plots
###Code
# gene_expression values for 1st sample
train_features.iloc[0, 4:4+772].plot()
plt.title('Gene Expressions (sample 0)')
plt.savefig('GeneExpressions0.pdf')
plt.show()
# Sorted gene_feature values for 1st sample
train_features.iloc[0, 4:4+772].sort_values().plot()
plt.title('Gene Expressions (Sorted by Values) (sample 0)')
plt.savefig('GeneExpressions0Sorted.pdf')
#plt.show()
# Checking progression of gene_expressions values
# gene_expression values for 6th sample
train_features.iloc[6, 4:4+772].plot()
plt.title('Gene Expressions (6)')
plt.savefig('GeneExpressionscsd6.pdf')
plt.show()
# Sorted gene_feature values for 6th sample
train_features.iloc[6, 4:4+772].sort_values().plot()
plt.title('Gene Expressions (Sorted by Values) (6)')
plt.savefig('GeneExpressions6Sorted.pdf')
plt.show()
# Checking progression of gene_expressions values
# gene_expression values for 55th sample
train_features.iloc[55, 4:4+772].plot()
plt.title('Gene Expressions (55)')
plt.savefig('GeneExpressions55.pdf')
plt.show()
# Sorted gene_feature values for 55th sample
train_features.iloc[55, 4:4+772].sort_values().plot()
plt.title('Gene Expressions (Sorted by Values) (55)')
plt.savefig('GeneExpressions55Sorted.pdf')
plt.show()
# Checking progression of gene_expressions values
# gene_expression values for 1021st sample
train_features.iloc[1021, 4:4+772].plot()
plt.title('Gene Expressions (1021)')
plt.savefig('GeneExpressions1021.pdf')
plt.show()
# Sorted gene_feature values for 1021st sample
train_features.iloc[1021, 4:4+772].sort_values().plot()
plt.title('Gene Expressions (Sorted by Values) (1021)')
plt.savefig('GeneExpressions1021Sorted.pdf')
plt.show()
# Checking progression of cell viability values
# cell viability values for 1st sample
train_features.iloc[0, 4+772:].plot()
plt.title('Cell Viability (0)')
plt.savefig('CellViability1.pdf')
plt.show()
# Sorted gene_feature values for 1st sample
train_features.iloc[0, 4+772:].sort_values().plot()
plt.title('Cell Viability (Sorted by Values) (0)')
plt.savefig('CellViability1Sorted.pdf')
plt.show()
# Checking progression of cell viability values
# cell viability values for 6th sample
train_features.iloc[6, 4+772:].plot()
plt.title('Cell Viability (6)')
plt.savefig('CellViability6.pdf')
plt.show()
# Sorted gene_feature values for 6th sample
train_features.iloc[6, 4+772:].sort_values().plot()
plt.title('Cell Viability (Sorted by Values) (6)')
plt.savefig('CellViability6Sorted.pdf')
plt.show()
# Checking progression of cell viability values
# cell viability values for 55th sample
train_features.iloc[55, 4+772:].plot()
plt.title('Cell Viability (55)')
plt.savefig('CellViability55.pdf')
plt.show()
# Sorted gene_feature values for 55th sample
train_features.iloc[55, 4+772:].sort_values().plot()
plt.title('Cell Viability (Sorted by Values) (55)')
plt.savefig('CellViability55Sorted.pdf')
plt.show()
# Checking progression of cell viability values
# cell viability values for 1021st sample
train_features.iloc[1021, 4+772:].plot()
plt.title('Cell Viability (1021)')
plt.savefig('CellViability1021.pdf')
plt.show()
# Sorted gene_feature values for 1021st sample
train_features.iloc[1021, 4+772:].sort_values().plot()
plt.title('Cell Viability (Sorted by Values) (1021)')
plt.savefig('CellViability1021Sorted.pdf')
plt.show()
###Output
_____no_output_____
###Markdown
Distribution Plots
###Code
import seaborn as sns
sns.distplot(train_features.loc[:, 'g-1']).set_title("Distribution of G-1")
plt.savefig('G-1Distribution.pdf')
sns.distplot(train_features.loc[:, 'g-80']).set_title("Distribution of G-80")
plt.savefig('G-80Distribution.pdf')
sns.distplot(train_features.loc[:, 'c-1']).set_title("Distribution of C-1")
plt.savefig('C-1Distribution.pdf')
sns.distplot(train_features.loc[:, 'c-80']).set_title("Distribution of C-80")
plt.savefig('C-80Distribution.pdf')
###Output
_____no_output_____
###Markdown
Data to Decision on Black Friday Sales How do we increase profit?This is perhaps the most common question businesses ask themselves, and one of the hardest problems to solve due to the many influencing factors. But maybe data can help! Whether you are a growing business or a budding machine learning engineer, this blog takes you through a step-by-step breakdown of how to see an ambiguous business problem through data exploration, model selection, training and finally deployment using Google's latest and greatest product - Vertex AI - and the trusty TensorFlow.We will be using historical transaction data on Black Friday Sales [REFERENCE] to build a product recommender and a purchase value prediction model for each customer based on their profile. But why would this increase profit? Businesses are overwhelmed by decisions, choices and factors that could impact their bottom line. However, we believe that an increase in profit is closely tied to an understanding of the customer and their needs. The models developed in this blog will provide key insights to retailers, leading to business decisions that are most likely to increase profits.The major steps in this process are:1. Business Problem Definition2. Data Exploration3. Feature Engineering4. Preprocessing Pipeline5. Model Selection6. Model Training and Development7. Model Deployment8. Model Evaluation9. Testing10. Modifications11. Code and Dataset 1. Business Problem Definition```Partners must describe:The business question/goal being addressed.The ML use case.How ML solution is expected to address the business question/goal?Evidence must include (in the Whitepaper) a top-line description of the business question/goal being addressed in this demo, and how the proposed ML solution will address this business goal.```---Black Friday is the premier retail sales event in many countries. Occurring in late November, Black Friday rings in the Christmas shopping season. Retail stores of all kinds have printers overflowing with red "SALE" signs and enter into discount wars with their competitors. This season is crucial for the economy and for retailers, as many consumers hold out for these sales due to the large discounts and great offers. For retailers, this is an opportunity to move overstocked items at the lowest possible prices and move seasonal items off the shelf.On Black Friday 2021, US consumers spent 8.92 billion USD in online sales for goods after Thanksgiving. This only just falls short of the previous year's record-setting 9.03 billion USD, according to Adobe. Due to the growing popularity of Black Friday sales, retailers spend a significant amount of the year and considerable resources planning their sales to maximise profits. A typical approach to analysis for the upcoming year is to assess historic sales information from the previous year.Using this data we will aim to **develop a deeper understanding of customer profiles, their product preferences and likely spend during Black Friday sales.** 2. Data Exploration and Feature Engineering```Partners must describe the following:How and what type of data exploration was performed?What decisions were influenced by data exploration?Evidence must include a description (in the Whitepaper) of the tools used and the type(s) of data exploration performed, along with code snippets (that accomplish the data exploration). 
Additionally, the whitepaper must describe how the data/model algorithm/architecture decisions were influenced by the data exploration.Partners must describe the following:What feature engineering was performed?What features were selected for use in the ML model and why?Evidence must include a description (in the Whitepaper) of the feature engineering performed (and rationale for the same), what original and engineered features were selected for incorporation as independent predictors in the ML model, and why. Evidence must include code snippets detailing the feature engineering and feature selection steps.```---To build our model we will be using the Black Friday Sales Dataset provided by [REFERENCE]. This dataset has been used to host a competition where participants try to predict the customer spending habits of “ABC Private Limited”, specifically the total purchase amount for a given product over the course of a month. They have shared a purchase summary of various customers for selected high-volume products from last month. The data set also contains customer demographics (age, gender, marital status, city_type, stay_in_current_city), product details (product_id and product category) and the total purchase_amount from last month.The dataset is stored in two comma-separated values (CSV) files: train.csv and test.csv.We will use Python, Pandas, Seaborn, and MatPlotLib to explore the data.Loading in the dataframe allows us to observe the different features (columns) and samples (rows).
###Code
from google.colab import drive
drive.mount('/content/drive')
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import os
export_path = os.path.join("/content/drive/MyDrive/Black Friday Data/", "20220120-085959")
import warnings
warnings.filterwarnings("ignore")
df = pd.read_csv('/content/drive/MyDrive/Black Friday Data/train.csv')
df.head()
###Output
_____no_output_____
###Markdown
The dataset contains 550,068 entries and 12 features. The feature names and their descriptions are shown in the table below:| Column Name | Description | Datatype | |-------------|-------------|----------|| User_ID | Unique ID of the customer| int64 || Product ID | Unique ID of the product | object || Gender | Sex of the customer | object || Age | Age of the customer | object || Occupation | Occupation code of the customer | int64 || City_Category | City of the customer | object || Stay_In_Current_City_Years | Number of years the customer has lived in the city | object | | Marital_Status | Marital status of the customer | int64 || Product_Category_1 | Category of product | int64 || Product_Category_2 | Category of product | float64 || Product_Category_3 | Category of product | float64 || Purchase | Total amount spent on a particular product over the past month | int64 |Each feature in the table above also has a data type, such as string, categorical or numerical.Whilst most of the features are self-explanatory, it is worth noting that the Purchase feature represents the total amount spent over the last month. Furthermore, the product category columns describe the areas that a given product can fall under. For example, a mobile phone might fall under electronics, mobile and photography. Initially these purchase values of $300,000 a month seemed quite high to us. However, upon further investigation of the origins of the dataset, the dataset originated in India and hence the currency is likely to be rupees rather than dollars. A short but important lesson in understanding the units and origin of your data!We analyse the Black Friday sales dataset further with the help of the Pandas library and the functions `df.info()`, `df.describe()` and `df.nunique()`.
###Code
# Datatype info
df.info()
df.describe()
df.nunique()
zero_df = df[df['Purchase'] == 0]
zero_df.shape
###Output
_____no_output_____
###Markdown
Through this we find a variety of useful information, such as the datatype of each feature and its associated statistical summary. It's worth noting that the statistical information only makes sense for non-categorical data, i.e. the mean of User_IDs is not useful for us to know but the mean Purchase value is.We also learn that Product_Category_2 and Product_Category_3 are both missing data. We can address null values in a few different ways:- Ignore the missing values. Missing values under 10% can generally be ignored, except when the data is dependent on the missing information.- Drop the missing values.- Drop the feature. If the number of missing values in a feature is very high then the feature should be left out of the analysis. Generally, if more than 5% of the data is missing from a feature, that feature should be left out.- Compute a value. The value can be imputed by the mean/mode/median. Other methods include regression techniques and K-Nearest Neighbour Imputation.Since these are used for categorising the Product_ID, they are not missing at random. Instead, we assume that the product only falls into a select number of categories.Out of the 550,068 records, Product_Category_2 has 376,430 non-null records and Product_Category_3 has 166,821 records that are not null. This is ~31.5% and ~70% of data points missing, respectively. So we can conclude that we can safely drop Product_Category_3; however, Product_Category_2 can be kept.To more deeply understand the structure and distributions of the data we use MatPlotLib and Seaborn to plot various graphs. Firstly, we plot the distribution of the Purchase (value over a month) for each customer. There are three clear peaks, around ~\$9000, \$16,000 and \$19,000. This suggests that there are potentially three types of customer profiles within the dataset.
###Code
plt.style.use('fivethirtyeight')
plt.figure(figsize=(15,10))
sns.distplot(df['Purchase'], bins=25)
###Output
_____no_output_____
###Markdown
Next we explore gender as a feature in the dataset by plotting the count of product purchases by males and females:
###Code
# distribution of numeric variables
sns.countplot(df['Gender'])
###Output
_____no_output_____
###Markdown
Using customer gender information is a contentious topic that is still actively being debated by the data science community [REFERENCE]. For the purposes of this demonstration gender has been kept as a feature; however, it is strongly advised that before developing such a model for a production setting, a privacy and ethics assessment is undertaken in accordance with the Australian government's AI Ethics Framework and 8 Ethics Principles [https://www.industry.gov.au/data-and-publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles].While this graph can be interpreted as showing that males represent the majority of purchases, it can be easily misunderstood. We must remember that the Purchase feature represents the total spent over the last month on a single product. This graph shows that males purchase a larger variety of products; however, it does not represent the actual **amount** that each gender spends.To find the value of spend by males and females we use a simple sum and plot the collective purchase amount by gender:
###Code
gender_df = df.groupby(["Gender"]).sum()
gender_df
ax = gender_df.plot.bar(y='Purchase', rot=0, title='Total Purchase Cost By Gender')
###Output
_____no_output_____
###Markdown
The above plot shows the total purchase cost by gender. The ratio of purchase amount of females to males is approximately 1:3, and hence males are likely to spend more on Black Friday sales. Given this result, gender can be safely assumed to be an important categorising factor in a customer profile when predicting purchase amounts.
###Code
sns.countplot(df['Age'])
###Output
_____no_output_____
###Markdown
There is a clear peak in the number of products purchased by 26-35 years old. Assuming that volume correlates with value we can say that there is an obvious target age group when looking to maximise sales. We consider the marital status of customer by assuming that the value '0' represents 'single' while '1' represents married. As a sum single customers have historically purchased more products than married customers on Black Friday. Although this data is provided it is worth noting that in the real world this information is likely difficult to collect and is not something that can be easily inferred or requested from customers to provide. Regardless, for the purposes of this demonstration we will include the martial status as a feature for our recommendation and purchase prediction models.
###Code
sns.countplot(df['Marital_Status'])
###Output
_____no_output_____
###Markdown
During the data exploration phase we find a number of different categories that are masked/encoded without any context and hence they only provide us some minor insights.These masked/encoded categories include:- Occupation- Product_Category_1- Product_Category_2- Product_Category_3- City
###Code
sns.countplot(df['Occupation'])
sns.countplot(df['Product_Category_1'])
sns.countplot(df['Product_Category_2'])
sns.countplot(df['Product_Category_3'])
sns.countplot(df['City_Category'])
###Output
_____no_output_____
###Markdown
Although there is limited context known about the city categories and their distribution, one of the tangible data points is on how long each customer has stayed in their current city. We imagine that information might be useful in the case that 'locals know the best deals'.The plot below shows that the most common customer profile is of those that have only been in their current city for 1 year.
###Code
sns.countplot(df['Stay_In_Current_City_Years'])
###Output
_____no_output_____
###Markdown
Understanding how features relate is a critical step in the ML process. We compare a variety of different features using bivariate analysis.
###Code
# Bivariate Analysis
occupation_plot = df.pivot_table(index='Occupation',values='Purchase', aggfunc=np.mean)
occupation_plot.plot(kind='bar', figsize=(13, 7))
plt.xlabel('Occupation')
plt.ylabel('Purchase')
plt.title('Occupation and Purchase Analysis')
plt.xticks(rotation=0)
plt.show()
age_plot = df.pivot_table(index='Age',values='Purchase', aggfunc=np.mean)
age_plot.plot(kind='bar', figsize=(13, 7))
plt.xlabel('Age')
plt.ylabel('Purchase')
plt.title('Age and Purchase Analysis')
plt.xticks(rotation=0)
plt.show()
gender_plot = df.pivot_table(index='Gender',values='Purchase', aggfunc=np.mean)
gender_plot.plot(kind='bar', figsize=(13, 7))
plt.xlabel('Gender')
plt.ylabel('Purchase')
plt.title('Gender and Purchase Analysis')
plt.xticks(rotation=0)
plt.show()
###Output
_____no_output_____
###Markdown
Interestingly, from the bivariate analysis we find that occupation, age, and gender only lead to minor variations in purchase amounts. Correlation matrix
###Code
corr = df.corr()
plt.figure(figsize=(14, 7))
sns.heatmap(corr, annot=True, cmap='coolwarm')
###Output
_____no_output_____
###Markdown
From the correlation heatmap, we can observe that the dependent feature ‘Purchase’ is highly correlated with ‘Product_Category_1’ and ‘Product_Category_2’ and hence these should be considered important features of the dataset. 3. Feature Engineering Decisions```Partners must describe the following:What feature engineering was performed?What features were selected for use in the ML model and why?Evidence must include a description (in the Whitepaper) of the feature engineering performed (and rationale for the same), what original and engineered features were selected for incorporation as independent predictors in the ML model, and why. Evidence must include code snippets detailing the feature engineering and feature selection steps.``` From the assessment of distributions and plots shown in section 2, the following features were used in the development and training of the models:- 'Product_ID',- 'Gender',- 'Age',- 'Occupation',- 'City_Category', - 'Stay_In_Current_City_Years',- 'Marital_Status',- 'Product_Category_1',- 'Product_Category_2',- 'Purchase'While the following categories were omitted:- User ID- Product Category 3User ID was omitted as it is an arbitrary identifier. Product Category 3 was omitted as it was shown in section 2 to have little correlation with the purchase amount. 4. Preprocessing Pipeline```The partner must:describe the data preprocessing pipeline, and how this is accomplished via a package/function that is a callable API (that is ultimately accessed by the served, production model).Evidence must include a description (in the Whitepaper) of how data preprocessing is accomplished, along with the code snippet that accomplishes data preprocessing as a callable API.```Neural networks tend to work better with one-hot encoded data, whereas decision trees work better with label (integer) encoded categories. Two versions of the dataset have been made, one for each encoding.During the preprocessing stage, the categorical columns need to be converted to type 'category' in pandas. These columns are *Gender, Stay_In_Current_City_Years, City_Category, Product_Category_1, Product_Category_2, Product_ID*. This allows machine learning libraries to handle the data implicitly. TensorFlow still requires the categories to be encoded. There are multiple ways to encode this data: you can use scikit-learn's encoding functionality; however, I have decided to use Pandas to reduce the number of libraries required.
###Code
# Check for null values
df.isnull().sum()
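# Sketch: the same check as a fraction of rows, which is where the ~31.5%
# (Product_Category_2) and ~70% (Product_Category_3) figures quoted earlier come from.
missing_ratio = df.isnull().mean().sort_values(ascending=False)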
###Output
_____no_output_____
###Markdown
To One-Hot encode the categorical data we can use Pandas pd.get_dummies() function.
###Code
df['Gender'] = df['Gender'].astype('category')
df_gender = pd.get_dummies(df['Gender'])
df_gender.head()
###Output
_____no_output_____
###Markdown
.cat.codes returns an integer representing the category. These codes are only available if the column type is set to *category*.
###Code
df['Age'] = df['Age'].astype('category').cat.codes
df.head()
df['Stay_In_Current_City_Years'] = df['Stay_In_Current_City_Years'].astype('category')
df_stay_in_currunt_city_years = pd.get_dummies(df['Stay_In_Current_City_Years'])
df_stay_in_currunt_city_years.head()
df['City_Category'] = df['City_Category'].astype('category')
df_city_category = pd.get_dummies(df['City_Category'])
df_city_category.head()
df['Product_Category_1'] = df['Product_Category_1'].astype('category')
df['Product_Category_1'] = df['Product_Category_1'].cat.codes
df.head()
###Output
_____no_output_____
###Markdown
Since *Product_Category_2* has a number of null values we can either remove them from the dataset or replace with a value. This value is often the columns mean or medium value. When analysising the dataset you will find that each Product_ID has the same product categories. Product_Category_2 and Product_Category_3 are both subcategories and therefore may not exist. Since the product category 0 does not exist we will use value 0 to indicate this. Other values such as -1, -2 etc are also appropriate.
###Code
df['Product_Category_2'] = df['Product_Category_2'].fillna(value=0)
df['Product_Category_2'].head()
df['Product_Category_2'] = df['Product_Category_2'].astype('category')
df['Product_Category_2'] = df['Product_Category_2'].cat.codes
df.head()
df['Product_ID'] = df['Product_ID'].astype('category')
df['Product_ID'] = df['Product_ID'].cat.codes
df.head()
df_label_encoded = df.copy()
df_label_encoded['Gender'] = df_label_encoded['Gender'].cat.codes
df_label_encoded['Stay_In_Current_City_Years'] = df_label_encoded['Stay_In_Current_City_Years'].cat.codes
df_label_encoded['City_Category'] = df_label_encoded['City_Category'].cat.codes
df_label_encoded = df_label_encoded.drop(columns=['User_ID', 'Product_Category_3'])
df_label_encoded.head()
df_one_hot_encoded = pd.concat([df, df_gender, df_city_category, df_stay_in_currunt_city_years], axis=1)
df_one_hot_encoded = df_one_hot_encoded.drop(columns=['User_ID', 'Gender', 'City_Category', 'Stay_In_Current_City_Years', 'Product_Category_3', 'Purchase'])
df_one_hot_encoded.head()
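# A minimal sketch of the "callable API" idea from section 4: wrap the manual steps above
# into one reusable function that takes the raw CSV dataframe (train or test) and returns
# the encoded features. This helper is illustrative, not the deployed pipeline.
def preprocess_black_friday(raw_df, one_hot=True):
    data = raw_df.drop(columns=['User_ID', 'Product_Category_3'])
    data['Product_Category_2'] = data['Product_Category_2'].fillna(value=0)
    for col in ['Age', 'Product_ID', 'Product_Category_1', 'Product_Category_2']:
        data[col] = data[col].astype('category').cat.codes
    categorical = ['Gender', 'City_Category', 'Stay_In_Current_City_Years']
    if one_hot:
        data = pd.concat([data.drop(columns=categorical), pd.get_dummies(data[categorical])], axis=1)
    else:
        for col in categorical:
            data[col] = data[col].astype('category').cat.codes
    return data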
###Output
_____no_output_____
###Markdown
Input split for recommendation system
###Code
X = df_label_encoded.drop(columns=['Purchase'])
y = df['Purchase']
###Output
_____no_output_____
###Markdown
Splitting the dataSplitting the dataset into training set and test sets. We will use 80% for training and 20% for testing. The random_state variable allows for repeatability.
###Code
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import mean_squared_error
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
###Output
_____no_output_____
###Markdown
5. Model Selection```Partners must describe the following:- Which ML model/algorithm(s) were chosen for demo 2?- What criteria were used for ML model selection?Evidence must describe (in the Whitepaper) selection criteria implemented, as well as the specific ML model algorithms that were selected for training and evaluation purposes. Code snippets detailing the model design and selection steps must be enumerated.``` To select an appropriate model we conduct an intial trade study between a variety of difference models. These vary in representation power, complexity and explainability:1. Linear Regression2. Decision Trees3. Random Forrest4. ExtraTreesRegressor5. Decision Forrests6. Artifical Neural Networks (ANN)7. ANNs with Dropout 5.1 Linear Regression
###Code
from sklearn.linear_model import LinearRegression
model = LinearRegression(normalize=True)
model.fit(X_train, y_train)
# predict the results
pred = model.predict(X_test)
# Cross validation
cv_score = cross_val_score(model, X_train, y_train, scoring='neg_mean_squared_error', cv=5) # 5 folds
cv_score = np.abs(np.mean(cv_score))
print('Results')
print('MSE:', mean_squared_error(y_test, pred))
print('CV Score:', cv_score)
coef = pd.Series(model.coef_, X_train.columns).sort_values()
coef.plot(kind='bar', title='Model Coefficients')
###Output
_____no_output_____
###Markdown
5.2 DecisionTreeRegressor
###Code
from sklearn.tree import DecisionTreeRegressor
model = DecisionTreeRegressor()
model.fit(X_train, y_train)
# predict the results
pred = model.predict(X_test)
# Cross validation
cv_score = cross_val_score(model, X_train, y_train, scoring='neg_mean_squared_error', cv=5) # 5 folds
cv_score = np.abs(np.mean(cv_score))
print('Results')
print('MSE:', mean_squared_error(y_test, pred))
print('CV Score:', cv_score)
features = pd.Series(model.feature_importances_, X_train.columns).sort_values(ascending=False)
features.plot(kind='bar', title='Feature Importance')
###Output
_____no_output_____
###Markdown
5.3 RandomForestRegressor
###Code
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_jobs=-1)
model.fit(X_train, y_train)
# predict the results
pred = model.predict(X_test)
# Cross validation
cv_score = cross_val_score(model, X_train, y_train, scoring='neg_mean_squared_error', cv=5) # 5 folds
cv_score = np.abs(np.mean(cv_score))
print('Results')
print('MSE:', mean_squared_error(y_test, pred))
print('CV Score:', cv_score)
features = pd.Series(model.feature_importances_, X_train.columns).sort_values(ascending=False)
features.plot(kind='bar', title='Feature Importance')
###Output
_____no_output_____
###Markdown
5.4 ExtraTreesRegressor
###Code
from sklearn.ensemble import ExtraTreesRegressor
model = ExtraTreesRegressor(n_jobs=-1)
model.fit(X_train, y_train)
# predict the results
pred = model.predict(X_test)
# Cross validation
cv_score = cross_val_score(model, X_train, y_train, scoring='neg_mean_squared_error', cv=5) # 5 folds
cv_score = np.abs(np.mean(cv_score))
print('Results')
print('MSE:', mean_squared_error(y_test, pred))
print('CV Score:', cv_score)
features = pd.Series(model.feature_importances_, X_train.columns).sort_values(ascending=False)
features.plot(kind='bar', title='Feature Importance')
###Output
_____no_output_____
###Markdown
5.5 Decision Forests
###Code
!pip install tensorflow_decision_forests
X = df_label_encoded
y = X['Purchase']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train.head()
import tensorflow_decision_forests as tfdf
# Convert pandas dataset to tf dataset
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(X_train, label='Purchase', task=tfdf.keras.Task.REGRESSION)
tfdf_model = tfdf.keras.RandomForestModel(task=tfdf.keras.Task.REGRESSION)
tfdf_model.fit(train_ds)
tfdf_model.summary()
tensorflow_forest_path = os.path.join("/content/drive/MyDrive/Black Friday Data/", "20220130-085959_tf_forest")
tf.saved_model.save(tfdf_model, tensorflow_forest_path)
###Output
_____no_output_____
###Markdown
5.6 Artificial Neural Network
###Code
def plot_loss(history):
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
#plt.ylim([0, 10])
plt.xlabel('Epoch')
plt.ylabel('Error [Purchase]')
plt.legend()
plt.grid(True)
X = df_one_hot_encoded.drop(columns=['Purchase'])
y = df['Purchase']
import tensorflow as tf
tf.__version__
ann = tf.keras.models.Sequential()
# Add input and first hidden layer
ann.add(tf.keras.layers.Dense(units=128, activation='relu'))
# Add a second hidden layer
ann.add(tf.keras.layers.Dense(units=256, activation='relu'))
# Add a third hidden layer
ann.add(tf.keras.layers.Dense(units=128, activation='relu'))
# Add an output layer
ann.add(tf.keras.layers.Dense(units=1))
ann.compile(optimizer=tf.optimizers.Adam(learning_rate=0.0005), loss='mean_squared_error')
###Output
_____no_output_____
###Markdown
If we use a validation split then we don't need to split the dataset beforehand, and it gives us a nicer training/validation loss graph
###Code
history = ann.fit(X, y, epochs=150, batch_size=32, validation_split=0.2)
# Save model
tf.saved_model.save(ann, export_path)
plot_loss(history)
dnn_results = ann.evaluate(
X_test, y_test, verbose=0)
dnn_results
test_predictions = ann.predict(X_test).flatten()
results = pd.DataFrame(test_predictions, columns=['dnn_128'])
results['actual'] = y_test.reset_index(drop=True)   # reset the index so it lines up with the predictions
results.head()
y_test.head()
results['ratio'] = results.apply(lambda row: row['dnn_128'] / row['actual'], axis=1)
results['ratio'].mean()
###Output
_____no_output_____
###Markdown
5.7 Artificial Neural Network w/ Dropout
###Code
dnn_with_dropout_path = os.path.join("/content/drive/MyDrive/Black Friday Data/", "20220120-085959_dnn_dropout")
dnn_with_dropout = tf.keras.models.Sequential()
# Add input and first hidden layer
dnn_with_dropout.add(tf.keras.layers.Dense(units=128, activation='relu'))
# Add a second hidden layer
dnn_with_dropout.add(tf.keras.layers.Dense(units=256, activation='relu'))
# Dropout layer
dnn_with_dropout.add(tf.keras.layers.Dropout(0.2))
# Add a third hidden layer
dnn_with_dropout.add(tf.keras.layers.Dense(units=128, activation='relu'))
# Add an output layer
dnn_with_dropout.add(tf.keras.layers.Dense(units=1))
dnn_with_dropout.compile(optimizer=tf.optimizers.Adam(learning_rate=0.0005), loss='mean_squared_error')
with tf.device('/device:GPU:0'):
history = dnn_with_dropout.fit(X, y, epochs=150, batch_size=32, validation_split=0.2)
# Save model
tf.saved_model.save(dnn_with_dropout, dnn_with_dropout_path)
plot_loss(history)
###Output
_____no_output_____
###Markdown
Deleting unnecessary columnsThese columns are specified in the documentation as not being part of the original data.
###Code
dropColumns = ['Naive_Bayes_Classifier_Attrition_Flag_Card_Category_Contacts_Count_12_mon_Dependent_count_Education_Level_Months_Inactive_12_mon_1',
'Naive_Bayes_Classifier_Attrition_Flag_Card_Category_Contacts_Count_12_mon_Dependent_count_Education_Level_Months_Inactive_12_mon_2']
df = df_original.drop(columns=dropColumns)
df.head()
df.shape
df.describe()
###Output
_____no_output_____
###Markdown
Exploring null values
###Code
df.isnull().sum()
df.loc[df['Customer_Age'] == 0, ['Customer_Age']].sum()
df['Education_Level'].unique()
df.loc[df['Education_Level'] == 'Unknown', ['Education_Level']].shape[0] / df.shape[0]
###Output
_____no_output_____
###Markdown
It appears that the null values in the data are notated as 'Unknown'. Therefore, let's write a function which computes the proportion of 'Unknown' values in a given column of this df.
###Code
def null_detector(df, column):
return df.loc[df[column] == 'Unknown', [column]].shape[0] / df.shape[0]
null_detector(df, 'Education_Level')
for column in df.columns:
print(column + "=" + str(null_detector(df, column)))
###Output
CLIENTNUM=0.0
Attrition_Flag=0.0
Customer_Age=0.0
Gender=0.0
Dependent_count=0.0
Education_Level=0.14999506270366347
Marital_Status=0.07396069912116125
Income_Category=0.10980547052434086
Card_Category=0.0
Months_on_book=0.0
Total_Relationship_Count=0.0
Months_Inactive_12_mon=0.0
Contacts_Count_12_mon=0.0
Credit_Limit=0.0
Total_Revolving_Bal=0.0
Avg_Open_To_Buy=0.0
Total_Amt_Chng_Q4_Q1=0.0
Total_Trans_Amt=0.0
Total_Trans_Ct=0.0
Total_Ct_Chng_Q4_Q1=0.0
Avg_Utilization_Ratio=0.0
###Markdown
We can observe that there are three columns which have null values. In this case we can try treating the nulls ('Unknown') as part of the data.
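One minimal way to do that (shown here as a sketch that assumes the `df` above, not a step the original analysis necessarily takes) is to leave 'Unknown' as just another category when encoding, so no rows are dropped or imputed:
```
import pandas as pd

# Treat 'Unknown' as an ordinary category: one-hot encode the three affected
# columns, which gives each of them an explicit *_Unknown indicator column.
cols_with_unknown = ['Education_Level', 'Marital_Status', 'Income_Category']
df_encoded = pd.get_dummies(df, columns=cols_with_unknown)

# Sanity check: the 'Unknown' level survives as its own feature column.
print([c for c in df_encoded.columns if c.endswith('_Unknown')])
```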
###Code
df['Income_Category'].unique()
###Output
_____no_output_____
###Markdown
Predicting HI, Velocity and Temperature from Flux
###Code
import numpy as np
import matplotlib.pyplot as plt
xs = np.load("data_xs.npy")
ys = np.load("data_ys.npy")
print("xs shape:", xs.shape)
print("ys shape:", ys.shape)
###Output
xs shape: (6000, 2048, 1)
ys shape: (6000, 2048, 3)
###Markdown
The data is not scaled or transformed in any way, so you will likely want to do some transformations before training. I have plotted a single sample below which includes some scaling.
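As one possible starting point, here is a minimal sketch of such a transformation; the specific choices (log10 for HI and temperature, dividing velocity by 100) simply mirror the scaling used in the plotting cell below and are assumptions rather than a prescription:
```
import numpy as np

# Log-scale the columns that span orders of magnitude and rescale velocity.
ys_scaled = np.empty_like(ys, dtype=float)
ys_scaled[..., 0] = np.log10(ys[..., 0])   # HI
ys_scaled[..., 1] = ys[..., 1] / 100.0     # velocity
ys_scaled[..., 2] = np.log10(ys[..., 2])   # temperature

# Standardise the flux input using global mean/std (training-set statistics
# would be the safer choice once the data is split).
xs_scaled = (xs - xs.mean()) / xs.std()
```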
###Code
f, (ax1, ax2) = plt.subplots(
ncols=2,
tight_layout=True,
figsize=(10, 5)
)
f.suptitle("A single data sample")
ax1.set_title("Input to Model")
ax1.plot(xs[0,...], label="Flux")
ax1.legend(frameon=False)
ax2.set_title("Expected Output")
ax2.plot(np.log10(ys[0,:,0]), label="HI")
ax2.plot(ys[0,:,1]/100, label="Velocity")
ax2.plot(np.log10(ys[0,:,2]), label="Temperature")
ax2.legend(frameon=False)
###Output
_____no_output_____
###Markdown
Research moduleThis notebook is used to explore the dataset provided by Kaggle, explore possible features, and try out basic approaches for our model
###Code
import os
import math
import numpy as np
import pandas as pd
import sys
import matplotlib.pyplot as plt
import pickle
from util import get_trained_model, preprocess_data
###Output
_____no_output_____
###Markdown
Loading the training data and showing first 10 entries
###Code
train_data = pd.read_table("data/training_set.tsv")
train_data = preprocess_data(train_data)
train_data.hist(column='avg_score', bins=24, ax= plt.figure(figsize = (12,5)).gca())
train_data.head()
###Output
/home/marin/Documents/Faks/3. semestar/Diplomski projekt/Egrader/util.py:32: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
set_i['avg_score']=set_i[rating_columns].mean(axis=1)
/home/marin/Documents/Faks/3. semestar/Diplomski projekt/Egrader/util.py:33: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
set_i['avg_score']=((set_i[rating_columns]-set_i[rating_columns].min())/(set_i[rating_columns].max()-set_i[rating_columns].min())).mean(axis=1)
###Markdown
Printing out distribution of essays based on average score
###Code
rating_columns = train_data.columns.values[3:].tolist()
train_data['avg_score']=train_data[rating_columns].mean(axis=1)
train_data[['essay_id', 'essay', 'avg_score']]
train_data.hist(column='avg_score', bins=24, ax= plt.figure(figsize = (12,5)).gca())
###Output
_____no_output_____
###Markdown
Printing out example of essay
###Code
train_data['essay'][0]
###Output
_____no_output_____
###Markdown
Testing out models
###Code
import sklearn.feature_extraction.text as te
import sklearn.model_selection as msel
import nltk
from sklearn import preprocessing
kfold = msel.KFold(shuffle=True, random_state=42)  # shuffle must be enabled for random_state to take effect
idx_train, idx_test = next(kfold.split(train_data.essay))
tfidf_vectorizer = te.CountVectorizer()
xs = tfidf_vectorizer.fit_transform(train_data.essay.iloc[idx_train])
ys = train_data.avg_score.iloc[idx_train]
xs_test = tfidf_vectorizer.transform(train_data.essay.iloc[idx_test])
ys_test = train_data.avg_score.iloc[idx_test]
xs.shape
min_max_scaler = preprocessing.MinMaxScaler()
ys_scaled = min_max_scaler.fit_transform(ys.values.reshape(-1,1))
ys_test_scaled = min_max_scaler.fit_transform(ys_test.values.reshape(-1,1))
ys_scaled
###Output
_____no_output_____
###Markdown
Simple Linear regression model with Count/tfidf vectorizer
###Code
from sklearn import linear_model
from sklearn import svm
from sklearn import metrics
#classifiers = [
#linear_model.BayesianRidge(),
#linear_model.LassoLars(),
#linear_model.ARDRegression(),
#linear_model.PassiveAggressiveRegressor(),
#linear_model.TheilSenRegressor(),
# linear_model.LinearRegression()
# ]
print(linear_model.LinearRegression(normalize=True))
clf = linear_model.LinearRegression()
clf.fit(xs.toarray(), ys_scaled)
ys_predicted = clf.predict(xs_test.toarray())
ys_predicted_scaled = min_max_scaler.fit_transform(ys_predicted)
print("MSE: {0}".format(metrics.mean_squared_error(ys_test_scaled, ys_predicted_scaled)))
import collections
collections.Counter(" ".join(train_data["essay"]).split()).most_common(100)
###Output
_____no_output_____
###Markdown
Data Exploration in Jupyter
###Code
URL = 'https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD'
from urllib.request import urlretrieve
urlretrieve(URL, 'Fremont.csv');
###Output
_____no_output_____
###Markdown
*unix command to see file*
###Code
!head Fremont.csv
###Output
Date,Fremont Bridge Total,Fremont Bridge East Sidewalk,Fremont Bridge West Sidewalk
10/03/2012 12:00:00 AM,13,4,9
10/03/2012 01:00:00 AM,10,4,6
10/03/2012 02:00:00 AM,2,1,1
10/03/2012 03:00:00 AM,5,2,3
10/03/2012 04:00:00 AM,7,6,1
10/03/2012 05:00:00 AM,31,21,10
10/03/2012 06:00:00 AM,155,105,50
10/03/2012 07:00:00 AM,352,257,95
10/03/2012 08:00:00 AM,437,291,146
###Markdown
*Using Pandas to read csv file in data frame*
###Code
import pandas as pd
data = pd.read_csv('Fremont.csv')
data.head()
###Output
_____no_output_____
###Markdown
*index_col - Changing index column from id to Date column* *parse_dates - Converting strings to date*
###Code
data = pd.read_csv('Fremont.csv', index_col='Date', parse_dates=True)
data.head()
###Output
_____no_output_____
###Markdown
*matplotlib inline - command to plot images in the notebook itself not separate windows*
###Code
%matplotlib inline
data.plot();
###Output
_____no_output_____
###Markdown
*The data is dense with too many data points. For a better view we will resample weekly and take the sum to see the total number of rides each week*
###Code
data.resample('W').sum().plot();
###Output
_____no_output_____
###Markdown
*Now we change the style of the plot and rename the data columns to shorten the legend*
###Code
import matplotlib.pyplot as plt
plt.style.use('seaborn')
data.columns = ['Total', 'East','West']
data.resample('W').sum().plot();
###Output
_____no_output_____
###Markdown
*Now we will try to identify any annual trend in the data by resampling the data daily and taking an aggregated rolling sum over 365 days (1 year). The result shows, for each data point, the sum of rides over the previous 365 days*
###Code
data.resample('D').sum().rolling(365).sum().plot();
###Output
_____no_output_____
###Markdown
*The axis limits are suspect because they do not go all the way down to zero, so we force ylim to run from zero up to the actual max value*
###Code
ax = data.resample('D').sum().rolling(365).sum().plot();
ax.set_ylim(0,None);
###Output
_____no_output_____
###Markdown
*Recomputing the Total column as the sum of West and East*
###Code
data['Total'] = data['West']+data['East']
ax = data.resample('D').sum().rolling(365).sum().plot();
ax.set_ylim(0,None);
###Output
_____no_output_____
###Markdown
*Now we are going to take a look at a trend in individual days by using groupby on the time of the day and taking the mean*
###Code
data.groupby(data.index.time).mean().plot();
###Output
_____no_output_____
###Markdown
*Now we want to see the entire dataset in this way by using a pivot table. We pivot the total counts, indexing on time of day with dates as columns. After that we look at the first 5 rows and columns of the resulting data frame*
###Code
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.iloc[:5,:5]
###Output
_____no_output_____
###Markdown
*Now we plot that data, where each row is a time of day and each column is a date, without a legend*
###Code
pivoted.plot(legend=False);
###Output
_____no_output_____
###Markdown
*It shows a line for each day of the year. We will lower the transparency (alpha) so that overlapping lines stack up visually.*
###Code
pivoted.plot(legend=False, alpha=0.01);
###Output
_____no_output_____
###Markdown
This notebook was used primarily as scratch space to work on manipulating the data before transferring it into .py files
###Code
import pandas as pd
import numpy as np
from random import choices, choice
import datetime
###Output
_____no_output_____
###Markdown
I generated a list of 200 names for the custodians and 80 for the schools. I need to check that they are all unique and then generate a table
###Code
# Check for duplicates in the names to be used for fake data.
names = {}
with open('names.txt', 'rb') as n:
for name in n:
if name in names:
names[name] += 1
else:
names[name] = 1
for name, count in names.items():
if count > 1:
print(name, count)
# Great, all the names are unique
# and check for duplicate schools
schools = {}
with open('schools.txt', 'rb') as s:
for school in s:
if school in schools:
schools[school] += 1
else:
schools[school] = 1
for school, count in schools.items():
if count > 1:
print(school, count)
# Great all unique as well
# On to generating a table.
# build the list of schools
schools = []
with open('schools.txt','r') as s:
for school in s:
schools.append(school.strip(' \xc2\xa0\n'))
# build the list of names
names = []
with open('names.txt', 'r') as n:
for name in n:
names.append(name.strip())
# choices is random sampling with replacement, which is what we want
choices(schools, k=30)
# build the body of the data table
body = [choices(schools, k=30) for _ in names]
body = np.array(body)
pd.DataFrame(body.T, columns=names)
# Good that gets all the data populated, now I just need dates.
numdays = 30
base = datetime.datetime.today().date()
date_list = [base - datetime.timedelta(days=x) for x in range(numdays)]
date_list
data = pd.DataFrame(body.T, index=date_list, columns=names)
# I actually want the data to be flipped the other way
data.to_csv('toyData.csv')
# From here I can start to build the code out.
###Output
_____no_output_____
###Markdown
Plan:1. Filter the dataframe to only the last two weeks.2. Iterate through the dataframe, matching anyone at the same school.
###Code
# pretending to start from a csv
data = pd.read_csv('toyData.csv')
# the date that the positive test came in, for now I'll use today but this
# needs to be adjustable in the final product
posdate = base
# how long back before a positive test that we want to search.
# for now we are going with the standard
riskPeriod = 14
# and assign someone to get sick:
sickPerson = choice(names)
# convert the date into datetime
data['Date'] = pd.to_datetime(data['Unnamed: 0'])
# filter the data down to what we want
danger_time = data[data['Date'] > posdate - datetime.timedelta(days = riskPeriod)]
# flip the data because that's how it becomes useful
danger_time = danger_time.T
# generate a list of all contacts
contacts = []
for day in danger_time:
# risk is the name of the school the sick person was at
risk = danger_time.loc[sickPerson][day]
# filter danger time for the list of people who were there
risky_contact = danger_time[danger_time[day] == risk].index.values
# save those to a list
contacts.append(list(risky_contact))
# flatten those contacts into a set so no repeated names
dangers = set()
for day in contacts:
for contact in day:
dangers.add(contact)
dangers
###Output
_____no_output_____
###Markdown
That code works in a very MVP kind of way. It does not yet account for the fact that there are three shifts of people that need to be handled separately
###Code
# shifts are 'A', 'M', and 'P'
# 'M' puts both at high risk, otherwise there's a high/med/low pattern:
# shift_risks has key = sick person's shift, value = risks to the other shifts
shift_risks = {'A': {'A': 'red', 'M': 'yellow', 'P': 'green'},
'M': {'A': 'red', 'M': 'red', 'P': 'red'},
'P': {'A': 'green', 'M': 'yellow', 'P': 'green'}}
# I also need to generate shifts for everyone
shift = {name:choice(['A', 'M', 'P']) for name in names}
shift['Unnamed: 0'] = None
shift['Date'] = None
# add shifts for second round
data = data.append(shift, ignore_index = True)
data.to_csv('data2.csv')
data.index  # peek at the index after appending the shift row
data = pd.read_csv('data2.csv')
# clean up the csv a little
data = data.drop('Unnamed: 0', axis = 1).drop('Date', axis=1)
data = data.rename({'Unnamed: 0.1': 'Date'}, axis=1)
data.to_csv('data2.csv')
###Output
_____no_output_____
###Markdown
And Now with Shifts!Now I need to do the same thing as before, but with the addition of a danger color level.
###Code
# start by pretending to read in data:
data = pd.read_csv('data2.csv')
# same as above
# the date that the positive test came in, for now I'll use today but this
# needs to be adjustable in the final product
posdate = base
# how long back before a positive test that we want to search.
# for now we are going with the standard
riskPeriod = 14
# and assign someone to get sick:
sickPerson = choice(names)
# convert the date into datetime
data['Date'] = pd.to_datetime(data['Date'])
# get rid of the redundant indexes
data = data.drop('Unnamed: 0', axis = 1)
# extract what shift everyone is on; it's easier to just cache this knowledge
shifts = dict(data.loc[0])
# filter the data down to what we want
danger_time = data[data['Date'] > posdate - datetime.timedelta(days = riskPeriod)]
# flip the data because that's how it becomes useful
danger_time = danger_time.T
# danger_shift is a dict of key shift value risk color
danger_shift = shift_risks[shifts[sickPerson]]
# generate a list of all contacts
contacts = []
for day in danger_time:
# risk is the name of the school the sick person was at
risk = danger_time.loc[sickPerson][day]
# filter danger time for the list of people who were there
risky_contact = danger_time[danger_time[day] == risk].index.values
# save those to a list
contacts.append(list(risky_contact))
# flatten those contacts into a set so no repeated names
dangers = set()
for day in contacts:
for contact in day:
dangers.add((contact, danger_shift[shifts[contact]]))
dangers, shifts[sickPerson]
# well that was easier than I expected it to be. Now I just need to
# convert this to a script that can run as an executable
###Output
_____no_output_____
###Markdown
Data Exploration
###Code
s3_data ="s3://aegovan-data/human_output/human_interactions_ppi_v2.json"
s3_annotations ="s3://aegovan-data/processed_dataset/input_data_pubtator_annotated_human.txt"
s3_results_prefix = "s3://aegovan-data/processed_dataset/"
human_idmapping_dat = "./data/HUMAN_9606_idmapping.dat"
idmapping_dat="./tmpmap.dat"
!cp $human_idmapping_dat $idmapping_dat
!wc -l $idmapping_dat
import logging, sys
# Set up logging
logging.basicConfig(level=logging.getLevelName("INFO"), handlers=[logging.StreamHandler(sys.stdout)],
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
import boto3
def download_single_file(bucket_name_path, local_path):
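# Parse an "s3://bucket/key" style path into a bucket name and object key, then download the object to local_path with boto3.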
index = bucket_name_path.find("://")
# remove the s3:// if present
if index > -1:
bucket_name_path = bucket_name_path[index + 3:]
key_start_index = bucket_name_path.find("/")
bucket_name = bucket_name_path
key = "/"
if key_start_index > -1:
bucket_name = bucket_name_path[0:key_start_index]
key = bucket_name_path[key_start_index + 1:]
client = boto3.resource('s3')
client.Bucket(bucket_name).download_file(key, local_path)
data_file="input_data.json"
annotations_file="input_data_annotations.txt"
download_single_file(s3_data, data_file)
download_single_file(s3_annotations, annotations_file)
import pandas as pd
data = pd.read_json(data_file)
print("Total number of records: {}".format(data.shape[0]))
data.pubmedId.nunique()
pd.set_option('display.max_columns', None)
pd.set_option('display.max_colwidth', 10000)
pd.set_option('display.max_rows', 100)
import matplotlib.pyplot as plt
plt.rcParams['font.family'] = 'sans-serif'
plt.rcParams['font.sans-serif'] = ['Times']
plt.rcParams.update({'font.size': 12})
###Output
_____no_output_____
###Markdown
Preliminary data transformations
###Code
#TODO: Fix data format
data["pubmedId"] = data["pubmedId"].astype(str)
data["interactionId"] = data["interactionId"].astype(str)
data["isValid"] = data.isNegative.isin(['false', '0', 'False'])
data = data.drop('isNegative', axis=1)
###Output
_____no_output_____
###Markdown
Sneak preview of the data
###Code
data.head(n=3)
data.shape
import matplotlib

def to_percent(y, position):
# Ignore the passed in position. This has the effect of scaling the default
# tick locations.
s = str(100 * y)
# The percent symbol needs escaping in latex
if matplotlib.rcParams['text.usetex'] is True:
return s + r'$\%$'
else:
return s + '%'
###Output
_____no_output_____
###Markdown
Duplicate interactions
###Code
def flat_participants_list(list_of_uniprot_dict):
return frozenset([item["uniprotid"] for item in list_of_uniprot_dict])
data["flatparticpants"]= data["participants"].apply(flat_participants_list)
data.groupby(["pubmedId", "flatparticpants", "interactionType"])\
.filter(lambda x: len(x) > 1)\
.groupby(["pubmedId", "flatparticpants", "interactionType"])\
.size()\
.sort_values(ascending=False)
data.query("pubmedId=='23560844'")[["pubmedId", "flatparticpants", "interactionType","interactionId"]]\
.sort_values(by="interactionId")
###Output
_____no_output_____
###Markdown
Number of interactions per paper**Note: The number of interactions per paper only takes the filtered interactions extracted from the Intact database**
###Code
import matplotlib.pyplot as plt
import numpy as np
ax = plt.axes( yscale='log')
ax.xaxis.set_major_locator(plt.MaxNLocator(10, prune='lower'))
#sns.distplot(data.pubmedId.value_counts().tolist(), bins=100, kde=False, norm_hist=True)
data.pubmedId.value_counts().plot.hist (bins=250,figsize=(10,5), ax=ax, color='dodgerblue')
plt.title('Histogram - number of interactions per pubmed')
plt.xlabel('Number of interactions per Pubmed paper')
plt.ylabel('Frequency')
#plt.show()
plt.savefig('PaperVsInteractions.eps', bbox_inches='tight')
plt.savefig('PaperVsInteractions.png', bbox_inches='tight')
plt.show()
df = data.pubmedId.value_counts().hist (bins=range(1, 30), figsize=(15,5), color = 'red')
plt.title('Papers vs number of interactions distribution ( Filtered distribution of interactions between 1 to 30)')
plt.xlabel('Number of interactions per paper')
plt.ylabel('Total number of papers')
plt.show()
###Output
_____no_output_____
###Markdown
Interaction Types distribution
###Code
data.interactionType.value_counts().plot.pie(autopct='%.2f',figsize=(8, 8))
plt.title('Interaction Type Distribution')
plt.savefig("Interactiontype.svg")
plt.show()
data.interactionType.value_counts().to_frame()
###Output
_____no_output_____
###Markdown
Distinct interaction types per paper
###Code
import numpy as np
distinct_no_papers = data['pubmedId'].nunique()
data.groupby('pubmedId')['interactionType'].nunique().hist(bins=100, density=1)
plt.title("Number of unique interaction types per paper")
plt.xlabel('Number of unique interaction types')
plt.ylabel('Percentage of Pubmed papers'.format(distinct_no_papers))
plt.show()
###Output
_____no_output_____
###Markdown
Positive vs Negative Relationships
###Code
data.isValid.value_counts().plot.pie(autopct='%.2f',figsize=(5, 5))
plt.title('Is Valid relationship')
plt.show()
###Output
_____no_output_____
###Markdown
Number of participants per interaction
###Code
import numpy as np
import matplotlib.ticker as mtick
fig, ax = plt.subplots( 1,1, figsize=(15,5))
#fig, ax = plt.subplots( 7,1, figsize=(45,30))
c_ax= ax
c_ax.yaxis.set_major_formatter(mtick.PercentFormatter())
data['participants_count'] = data["participants"].apply(lambda x: len(x))
data['participants_count'].hist (bins=50, ax=c_ax, figsize=(5,5), color = 'dodgerblue', weights = np.ones_like(data['participants_count'].index)*100 / len(data['participants_count'].index))
plt.title("Participants count per interaction")
plt.xlabel('Number of participants per interaction')
plt.ylabel('Percentage of interactions')
plt.savefig("ParticipantsPerInteraction.eps")
plt.show()
###Output
_____no_output_____
###Markdown
Explore if the abstract contains the trigger word
###Code
!pip install nltk==3.4.5
from nltk.stem import PorterStemmer
from nltk.tokenize import sent_tokenize, word_tokenize
stemmer = PorterStemmer()
print(pd.DataFrame(data.interactionType.unique()).apply(lambda r: stemmer.stem(r.iloc[0].lower()), axis=1))
data["hasTriggerWord"] = data.apply(lambda r: stemmer.stem(r["interactionType"].lower()) in r["pubmedabstract"].lower() , 1)
data.hasTriggerWord.value_counts().plot.pie(autopct='%.2f',figsize=(5, 5))
plt.title('Has trigger word')
plt.show()
data.groupby([ 'interactionType','hasTriggerWord']).size().unstack().apply(lambda x: round(x/sum(x),3)*100, axis=1)
###Output
_____no_output_____
###Markdown
Explore how many of the entity aliases are mentioned in the abstract
###Code
%%time
from difflib import SequenceMatcher
def getEntityMentionsCount(r):
count = 0
abstract = r["pubmedabstract"].lower()
abstract_len= len(abstract)
for p in r["participants"]:
if p is None or p['alias'] is None : continue
for a in p['alias']:
alias = a[0].lower()
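# Fuzzy match: count the alias as mentioned if its longest common substring with the abstract is at least 3 characters and covers at least half of the alias.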
s = SequenceMatcher(None, abstract, alias)
_,_,match_size = s.find_longest_match(0, len(abstract), 0, len(alias))
if match_size >= 3 and match_size >= len(alias)/2 :
count += 1
return count
data["entityMentionsCount"] = data.apply(lambda r: getEntityMentionsCount(r) , 1)
data['entityMentionsCount'].hist ( bins=150, figsize=(15,5), color = 'red')
plt.title('Entity mentions count distribution')
plt.xlabel('Entity name mentions count in abstract')
plt.ylabel('Total number of interactions')
plt.show()
(data['entityMentionsCount'] > 0).value_counts().plot.pie(autopct='%.2f')
plt.title("Percentage of interactions with entity mentions ")
plt.ylabel("Entity mentions > 0")
plt.show()
###Output
_____no_output_____
###Markdown
Randomly eye ball interactions with no entity mentions
###Code
data.query('entityMentionsCount == 0')[['participants','pubmedabstract' ]].sample(n=3)
data.head(n=2)
###Output
_____no_output_____
###Markdown
Data Transformation Drop duplicates by ["pubmedId", "flatparticpants", "interactionType"]
###Code
data.shape
process_map =[]
process_map.append({"name": "Initial", "count": len(data) })
filtered = data.drop_duplicates(subset=["pubmedId", "flatparticpants", "interactionType"], keep='last')
process_map.append({"name": "Drop duplicates (pubmedId, participant uniprots, interactionType)",
"count": len(filtered) })
filtered.shape
###Output
_____no_output_____
###Markdown
Filter interactions with participants != 2
###Code
filtered = filtered[~filtered.pubmedId.isin( filtered.query('participants_count > 2').pubmedId)]
process_map.append({"name": "Drop abstracts that have n-ary relationship n > 2",
"count": len(filtered) })
filtered.shape
import matplotlib.ticker as mtick
fig, ax = plt.subplots( 1,1, figsize=(15,5))
c_ax= ax
c_ax.yaxis.set_major_formatter(mtick.PercentFormatter())
filtered['participants_count'].hist (bins=50, figsize=(5,5), ax=c_ax, color = 'blue', weights = np.ones_like(filtered['participants_count'].index)*100 / len(filtered['participants_count'].index))
plt.title("Participants count per interaction")
plt.xlabel('Number of participants per interaction')
plt.ylabel('Percentage of interactions')
plt.savefig("ParticipantsPerInteraction.eps")
plt.show()
###Output
_____no_output_____
###Markdown
Flatten participants into participant 1 and participant 2
###Code
from datatransformer.jsonPPIFlattenTransformer import IntactJsonPpiFlattenTransformer
sut = IntactJsonPpiFlattenTransformer()
data_transformed = sut.transform(filtered)
data_transformed.head(n=2)
data_transformed.shape
###Output
_____no_output_____
###Markdown
Remove records where the participantId is null
###Code
data_transformed.shape
data_filtered = data_transformed[data_transformed.participant1Id.notnull() & data_transformed.participant2Id.notnull() ]
process_map.append({"name": "Drop interactions where participant Unitprot identifiers are null",
"count": len(data_filtered) })
data_filtered.shape
data_filtered.head(n=2)
###Output
_____no_output_____
###Markdown
Normalise abstract
###Code
def normalise_absract(data, enity_annotations_file):
from datatransformer.abstractGeneNormaliser import AbstractGeneNormaliser
from datatransformer.ncbiGeneUniprotLocalDbMapper import NcbiGeneUniprotLocalDbMapper
from datatransformer.ncbiGeneUniprotMapper import NcbiGeneUniprotMapper
from dataformatters.gnormplusPubtatorReader import GnormplusPubtatorReader
from datatransformer.textGeneNormaliser import TextGeneNormaliser
import os
localdb = idmapping_dat
with open(localdb, "r") as dbhandle:
mapper = NcbiGeneUniprotLocalDbMapper(dbhandle, "GeneID")
#Read gnormplus identified entities
reader = GnormplusPubtatorReader()
with open(enity_annotations_file,"r") as handle:
annotations_json = list(reader(handle))
#
normaliser = AbstractGeneNormaliser(annotations_json)
normaliser.text_gene_normaliser = TextGeneNormaliser(geneIdConverter = mapper)
result = normaliser.transform(data)
return result
%%time
data_filtered = normalise_absract(data_filtered.copy(deep=True), annotations_file)
data_filtered.shape
data_filtered.head(n=3)
data_filtered.query("interactionType == 'acetylation'").shape
fig, ax = plt.subplots( 1,4, figsize=(15,5))
tmp = pd.DataFrame()
data_filtered["particpant1Exists"] = data_filtered.apply(lambda r: r["participant1Id"] in r["normalised_abstract"] , 1)
data_filtered["particpant1Exists"].value_counts().plot.pie(ax=ax[0], autopct='%.2f')
data_filtered["particpant2Exists"] = data_filtered.apply(lambda r: r["participant2Id"] in r["normalised_abstract"] , 1)
data_filtered["particpant2Exists"].value_counts().plot.pie(ax=ax[1], autopct='%.2f')
data_filtered["bothParticpantsExist"] = data_filtered.apply(lambda r: r["particpant2Exists"] and r["particpant1Exists"] , 1)
data_filtered["bothParticpantsExist"].value_counts().plot.pie(ax=ax[2], autopct='%.2f')
data_filtered["noParticpantsExist"] = data_filtered.apply(lambda r: not (r["particpant2Exists"] or r["particpant1Exists"]) , 1)
data_filtered["noParticpantsExist"].value_counts().plot.pie(ax=ax[3], autopct='%.2f')
fig, ax = plt.subplots(1,7, figsize=(20,5))
data_filtered.groupby([ "bothParticpantsExist", 'interactionType']).size().unstack().plot.bar(subplots=True, ax=ax)
plt.show()
data_filtered.query("particpant2Exists == False").sample(4)
###Output
_____no_output_____
###Markdown
Remove interactions unless both participants are found in the abstract
###Code
data_filtered = data_filtered.query('bothParticpantsExist == True')
process_map.append({"name": "Drop interactions where the participant UniprotID does not exist in abstract",
"count": len(data_filtered) })
data_filtered.shape
data_filtered.query("interactionType == 'acetylation'").shape
process_map
###Output
_____no_output_____
###Markdown
Drop PPIs without trigger word
###Code
data_filtered = data_filtered.query('hasTriggerWord == True')
process_map.append({"name": "Drop interactions where the abstract does not contain the trigger word",
"count": len(data_filtered) })
data_filtered.shape
###Output
_____no_output_____
###Markdown
Remove self relations
###Code
data_filtered = data_filtered.query('participant1Id != participant2Id')
process_map.append({"name": "Drop interactions where participant1 = participant2 (self relations)",
"count": len(data_filtered) })
data_filtered.shape
process_map
data_filtered.query("interactionType == 'acetylation'").shape
fig, ax = plt.subplots( 1,1, figsize=(4,5))
c_ax= ax
c_ax.set_title('Overall distribution post filter {}'.format(data_filtered.shape[0]))
c_ax.yaxis.set_major_formatter(mtick.PercentFormatter())
c_ax.yaxis.set_major_locator(plt.FixedLocator(range(0,100, 10)))
data_filtered.groupby(['interactionType']).size().apply(lambda x: 100 * x / float(len(data_filtered.interactionType))).plot.bar(ax=c_ax, color='gray')
plt.savefig("Interactiontype_postfilter.eps", bbox_inches='tight')
plt.savefig("Interactiontype_postfilter.png", bbox_inches='tight')
plt.show()
data_filtered.groupby(['interactionType']).size()
###Output
_____no_output_____
###Markdown
Check how many contain the trigger word
###Code
data_filtered.groupby([ 'interactionType','hasTriggerWord']).size().unstack(fill_value = 0)
data_filtered.groupby([ 'interactionType','hasTriggerWord']).size().unstack(fill_value = 0).apply(lambda x: round(x/sum(x),3)*100, axis=1)
data_filtered.query('pubmedId == "17126281"')
data_filtered.query("interactionType == 'acetylation'")[["interactionType", "pubmedId", "pubmedTitle",
"participant1Id", "participant2Id" ]]
###Output
_____no_output_____
###Markdown
Verify no duplicates
###Code
duplicates = data_filtered.groupby(["interactionType", "pubmedId", "participant1Id", "participant2Id"])\
.filter(lambda x: len(x) > 1)\
.groupby(["interactionType", "pubmedId", "participant1Id", "participant2Id"]).size()
assert len(duplicates)==0
process_map
###Output
_____no_output_____
###Markdown
Split Train/Test/validation
###Code
from sklearn.model_selection import train_test_split
unique_pubmed = data_filtered.pubmedId.unique()
stratified = [ data_filtered.query("pubmedId == '{}'".format(p))['interactionType'].iloc[0] for p in unique_pubmed]
trainpubmed, valpubmed = train_test_split(unique_pubmed, test_size=.1,
random_state=777, stratify=stratified)
stratified = [data_filtered.query("pubmedId == '{}'".format(p))['interactionType'].iloc[0] for p in trainpubmed]
trainpubmed, testpubmed = train_test_split(trainpubmed, test_size=.2,
random_state=777, stratify=stratified)
data_filtered.query("interactionType == 'demethylation'")['pubmedId'].unique()
data_filtered.query("interactionType == 'ubiquitination'")['pubmedId'].unique()
train = data_filtered[data_filtered['pubmedId'].isin(trainpubmed)]
test = data_filtered[data_filtered['pubmedId'].isin(testpubmed)]
val = data_filtered[data_filtered['pubmedId'].isin(valpubmed)]
train.query("interactionType == 'ubiquitination'")['pubmedId'].unique()
val.query("interactionType == 'ubiquitination'")
import matplotlib.pyplot as plt
import matplotlib
import matplotlib.ticker as mtick
fig, ax = plt.subplots( 1,3, figsize=(12,5))
#fig, ax = plt.subplots( 7,1, figsize=(45,30))
c_ax= ax[0]
c_ax.set_title('Train set {}'.format(train.shape[0]))
c_ax.yaxis.set_major_formatter(mtick.PercentFormatter())
c_ax.yaxis.set_major_locator(plt.FixedLocator(range(0,100, 5)))
train.groupby(['interactionType']).size().apply(lambda x: 100 * x / float(len(train.interactionType))).plot.bar(ax=c_ax, color='gray')
c_ax = ax[1]
c_ax.set_title('Validation set {}'.format(val.shape[0]))
c_ax.yaxis.set_major_formatter(mtick.PercentFormatter())
c_ax.yaxis.set_major_locator(plt.FixedLocator(range(0,100, 5)))
val.groupby(['interactionType']).size().apply(lambda x: 100 * x / float(len(val.interactionType))).plot.bar(ax=c_ax, color='gray')
c_ax = ax[2]
c_ax.set_title('Test set {}'.format(test.shape[0]))
c_ax.yaxis.set_major_formatter(mtick.PercentFormatter())
c_ax.yaxis.set_major_locator(plt.FixedLocator(range(0,100, 5)))
test.groupby(['interactionType']).size().apply(lambda x: 100 * x / float(len(test.interactionType))).plot.bar(ax=c_ax, color='gray')
plt.savefig("split_dataset_postfilter.eps", bbox_inches='tight')
plt.savefig("split_dataset_postfilter.png", bbox_inches='tight')
plt.show()
import matplotlib.pyplot as plt
import matplotlib
import matplotlib.ticker as mtick
fig, ax = plt.subplots( 1,3, figsize=(15,5))
#fig, ax = plt.subplots( 7,1, figsize=(45,30))
c_ax= ax[0]
c_ax.set_title('Train set total positive class {}'.format(train.shape[0]))
c_ax.yaxis.set_major_locator(plt.MaxNLocator( prune='both'))
train.interactionType.value_counts().sort_index().plot.bar(ax=c_ax, color='gray')
c_ax = ax[1]
c_ax.set_title('Validation set total positive class {}'.format(val.shape[0]))
c_ax.yaxis.set_major_locator(plt.MaxNLocator( prune='both'))
val.interactionType.value_counts().sort_index().plot.bar(ax=c_ax, color='gray')
c_ax = ax[2]
c_ax.set_title('Test set total positive class {}'.format(test.shape[0]))
c_ax.yaxis.set_major_locator(plt.MaxNLocator( prune='both'))
test.interactionType.value_counts().sort_index().plot.bar(ax=c_ax, color='gray')
plt.savefig('TrainTestValidationInteractionDistribution.eps', bbox_inches='tight')
plt.savefig('TrainTestValidationInteractionDistribution.png', bbox_inches='tight')
plt.show()
###Output
_____no_output_____
###Markdown
Sample network
###Code
import networkx as nx
import matplotlib.pyplot as plt
import random
random.seed(a=78, version=2)
fig,ax=plt.subplots(1,2, figsize=(18,8))
G=nx.Graph()
# Add nodes and edges
G.add_edges_from(train.query(" participant2Id =='Q5S007' and participant1Id != participant2Id")
.apply(lambda x: ( x["participant1Id"],x["participant2Id"], {"type": x['interactionType']}), axis=1))
pos = nx.spring_layout(G, seed=80)
nx.draw(G, node_color='lightgrey', pos=pos, node_size=1000, with_labels = True, ax=ax[0])
edge_label = nx.get_edge_attributes(G,'type')
colors = {i:random.randint(0, 50) for i in train['interactionType'].unique()}
edge_colors = [ colors[l] for _,l in edge_label.items()]
cmap=plt.cm.get_cmap("rainbow")
vmin = min(edge_colors)
vmax = max(edge_colors)
nx.draw(G, node_color='lightgrey', pos=pos, node_size=1000, with_labels = True, ax=ax[1])
nx.draw_networkx_edges(G, pos, width=1.0, edge_color=edge_colors, edge_cmap=cmap, edge_vmin=vmin, edge_vmax=vmax)
nx.draw_networkx_edge_labels(G, pos=pos,alpha=1, edge_labels = nx.get_edge_attributes(G,'type'), ax=ax[1])
plt.savefig('network.pdf', bbox_inches="tight")
plt.show()
###Output
/Users/aeg/venv/PPI-typed-relation-extractor/lib/python3.7/site-packages/matplotlib/font_manager.py:1238: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans.
(prop.get_family(), self.defaultFamily[fontext]))
###Markdown
Generate negative samples
###Code
# def generate_negative_old(data):
# import uuid
# unique_pubmeds = data["pubmedId"].unique()
# data_fake = pd.DataFrame(columns=data.columns)
# num_fake_records = int( .50 * len(data))
# #TODO: Randomise this, biased via
# for u in unique_pubmeds:
# fake_records = pd.DataFrame(data[ data.pubmedId != u] ).sample(n=1)
# fake_records.loc[:, "interactionId"] = fake_records.interactionId.astype(str) + "_" + str(uuid.uuid4() ) + "_" + "fake"
# fake_records.loc[:,"isValid"] = 'False'
# ## Copy of the pubmeid abtract and the title from a id
# fake_records.loc[:,"pubmedId"] = u
# fake_records.loc[:, "pubmedTitle"] = data[ data.pubmedId == u].iloc[0]["pubmedTitle"]
# fake_records.loc[:, "pubmedabstract"] = data[ data.pubmedId == u].iloc[0]["pubmedabstract"]
# data_fake = data_fake.append(fake_records, ignore_index=True)
# if len(data_fake) > num_fake_records:
# break
# return data_fake
def generate_negative_entity(data, enity_annotations_file):
from dataformatters.gnormplusPubtatorReader import GnormplusPubtatorReader
from datatransformer.gnormplusNegativeSamplesAugmentor import GnormplusNegativeSamplesAugmentor
from datatransformer.ncbiGeneUniprotLocalDbMapper import NcbiGeneUniprotLocalDbMapper
import os
localdb = human_idmapping_dat
with open(localdb, "r") as dbhandle:
mapper = NcbiGeneUniprotLocalDbMapper(dbhandle, "GeneID")
#Read gnormplus identified entities
reader = GnormplusPubtatorReader()
with open(enity_annotations_file,"r") as handle:
annotations_json = list(reader(handle))
negative_samples_generator = GnormplusNegativeSamplesAugmentor(annotations_json, mapper)
result = negative_samples_generator.transform(data)
return result
def generate_negative_interaction(data):
from datatransformer.interactionTypeNegativeSamplesAugmentor import InteractionTypeNegativeSamplesAugmentor
import os
negative_samples_generator = InteractionTypeNegativeSamplesAugmentor()
result = negative_samples_generator.transform(data)
return result
def generate_negative_missing_participant(data):
import os
data['isValid'] = data['isValid'].mask( (data['bothParticpantsExist'] == False) & (data['isValid'] == True) , False)
return data
def plot_negative_distribution(train,val, test, heading, fig, ax ):
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
fig.suptitle(heading)
#fig, ax = plt.subplots( 7,1, figsize=(45,30))
c_ax= ax[0]
c_ax.set_facecolor('xkcd:white')
c_ax.yaxis.set_major_formatter(mtick.PercentFormatter())
c_ax.set_title( "Training PPI {}".format( train.shape[0]))
c_ax.yaxis.set_major_locator(plt.FixedLocator(range(0,100, 10)))
train.groupby(['interactionType', 'isValid']).size().groupby( level=0).apply(lambda x:
100 * x / float(x.sum())).unstack().plot.bar(ax=c_ax, hatch = '/')
c_ax= ax[1]
c_ax.set_facecolor('xkcd:white')
c_ax.yaxis.set_major_formatter(mtick.PercentFormatter())
c_ax.set_title( "Validation PPI {}".format(val.shape[0]))
c_ax.yaxis.set_major_locator(plt.FixedLocator(range(0,100, 10)))
val.groupby(['interactionType', 'isValid']).size().groupby( level=0).apply(lambda x:
100 * x / float(x.sum())).unstack().plot.bar(ax=c_ax, hatch = '/')
c_ax= ax[2]
c_ax.set_facecolor('xkcd:white')
c_ax.yaxis.set_major_formatter(mtick.PercentFormatter())
c_ax.set_title( "Test PPI {}".format(test.shape[0]))
c_ax.yaxis.set_major_locator(plt.FixedLocator(range(0,100, 10)))
test.groupby(['interactionType', 'isValid']).size().groupby( level=0).apply(lambda x:
100 * x / float(x.sum())).unstack().plot.bar(ax=c_ax, hatch = '/')
###Output
_____no_output_____
###Markdown
Step1: Add negative entity pairs
###Code
train = generate_negative_entity(train, annotations_file)
test = generate_negative_entity(test, annotations_file)
val = generate_negative_entity(val, annotations_file)
fig, ax = plt.subplots( 1,3, figsize=(15,5))
plt.style.use('grayscale')
plot_negative_distribution(train, val, test, "Distribution after adding negative entity pairs",fig, ax)
plt.savefig("EntityNegativeSample.eps",bbox_inches = "tight")
plt.savefig("EntityNegativeSample.svg",bbox_inches = "tight")
plt.show()
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1,3, figsize=(15,20))
ax[0].set_title('Train class distribution: negative entities only')
train.isValid.value_counts().plot.pie(autopct='%.2f', ax=ax[0])
ax[1].set_title('Validation class distribution: negative entities only')
val.isValid.value_counts().plot.pie(autopct='%.2f', ax=ax[1])
ax[2].set_title('Test class distribution: negative entities only')
test.isValid.value_counts().plot.pie(autopct='%.2f', ax=ax[2])
plt.savefig("PositiveVsNegative_EntityOnly.png")
train_file ="train_unique_negative_entity_only.json"
train.to_json(train_file)
test_file ="test_unique_negative_entity_only.json"
test.to_json(test_file)
val_file = "val_unique_negative_entity_only.json"
val.to_json(val_file)
from helpers.s3_util import S3Util
S3Util().uploadfile(train_file, "{}/".format( s3_results_prefix.rstrip("/")) )
S3Util().uploadfile(test_file, "{}/".format( s3_results_prefix.rstrip("/")) )
S3Util().uploadfile(val_file, "{}/".format( s3_results_prefix.rstrip("/")) )
train.groupby(['interactionType', 'isValid']).size().unstack()
val.groupby(['interactionType', 'isValid']).size().unstack()
test.groupby(['interactionType', 'isValid']).size().unstack()
pd.DataFrame(train.groupby(['interactionType', 'isValid']).size().unstack())
t = pd.DataFrame(train.groupby(['interactionType', 'isValid']).size().unstack())
t.columns =["False", "True"]
v = pd.DataFrame(val.groupby(['interactionType', 'isValid']).size().unstack())
v.columns = ["False", "True"]
b = pd.DataFrame(test.groupby(['interactionType', 'isValid']).size().unstack())
b.columns = ["False", "True"]
m = t.merge(v, left_index = True, right_index=True, how="left", suffixes=('_train', '_val'))\
.merge(b, left_index = True, right_index=True, how="left")
m = m.fillna(0)
m.loc["Total"] = m.apply(lambda x: sum(x))
m["TotalFalse"] = m.apply(lambda x: sum( [ v for k,v in x.items() if 'false' in k.lower()]), axis=1)
m["TotalTrue"] = m.apply(lambda x: sum( [ v for k,v in x.items() if 'true' in k.lower()]), axis=1)
print(m.astype('int32').to_latex())
feature_cols = ["pubmedId","pubmedabstract","annotations", "num_unique_gene_normalised_id", "num_gene_normalised_id", "normalised_abstract","normalised_abstract_annotations", "participant1Id", "participant2Id", "gene_to_uniprot_map", "participant1Name", "participant2Name"]
derive_class_func = lambda r: r["interactionType"] if r["isValid"] else "other"
train_multiclass = train[ feature_cols]
train_multiclass["class"] = train.apply( derive_class_func, axis=1)
test_multiclass = test[ feature_cols]
test_multiclass["class"] = test.apply( derive_class_func, axis=1)
val_multiclass = val[ feature_cols]
val_multiclass["class"] = val.apply( derive_class_func, axis=1)
train_multiclass["class"].value_counts()
test_multiclass["class"].value_counts()
val_multiclass["class"].value_counts()
train_multi_file="train_multiclass.json"
train_multiclass.to_json(train_multi_file)
test_multi_file="test_multiclass.json"
test_multiclass.to_json(test_multi_file)
val_multi_file="val_multiclass.json"
val_multiclass.to_json(val_multi_file)
val_multiclass.head(n=1)
val_multiclass.query("`class` == 'other'" ).sample(n=5)
val_multiclass.query("`class` != 'other' " ).sample(n=5)
from helpers.s3_util import S3Util
S3Util().uploadfile(val_multi_file, "{}/".format( s3_results_prefix.rstrip("/")) )
S3Util().uploadfile(test_multi_file, "{}/".format( s3_results_prefix.rstrip("/")) )
S3Util().uploadfile(train_multi_file, "{}/".format( s3_results_prefix.rstrip("/")) )
train_multiclass.sample(n=50).to_json("sample_train_multiclass.json")
###Output
_____no_output_____
genport/gen_quantstats.ipynb
###Markdown
###Code
pip install quantstats
%matplotlib inline
import quantstats as qs
from google.colab import files
import pandas as pd
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
df = pd.read_csv(fn)
df = df[['날짜', '일일수익률']]                             # keep only the date ('날짜') and daily return ('일일수익률') columns
df['날짜'] = pd.to_datetime(df['날짜'], format='%Y%m%d')   # parse YYYYMMDD dates
df['일일수익률'] = df['일일수익률'] / 100                    # convert percent to fraction
df = df.set_index('날짜')                                  # index by date
df = df.squeeze()                                          # single-column DataFrame -> Series of daily returns
file_name = fn[:-4]
qs.reports.html(df, title=file_name,output=f"/content/{file_name}.html")
###Output
_____no_output_____
labs/working_with_data.ipynb
###Markdown
Working with Data Part of the [Inquiryum Machine Learning Fundamentals Course](http://inquiryum.com/machine-learning/)In the examples we have been working with so far, all the columns had numerical data. For example, the violet classification data looked like: Sepal Length|Sepal Width|Petal Length|Petal Width|Class:--: | :--: |:--: |:--: |:--: 5.3|3.7|1.5|0.2|Iris-setosa5.0|3.3|1.4|0.2|Iris-setosa5.0|2.0|3.5|1.0|Iris-versicolor5.9|3.0|4.2|1.5|Iris-versicolor6.3|3.4|5.6|2.4|Iris-virginica6.4|3.1|5.5|1.8|Iris-virginicaNotice that all the feature columns had numeric data. This isn't always the case. In addition to **numeric** data, datasets often contain **categorical data**. A column that contains **categorical data** means that the values are from a limited set of values. For example:Movie | Tomato Rating | Genre | Rating | Length :---: | :---: | :---: | :---: | :---: First Man | 88 | Drama | PG-13 | 138Can You Ever Forgive Me | 98 | Drama | R | 107The Girl in the Spider's Web | 41 | Drama | R | -99Free Solo | 99 | Documentary | PG-13 | 97The Grinch | 57 | Animation | PG | 86Overlord | 80 | Action | R | 109Christopher Robin | 71 | Comedy | PG | -99Ant Man and the Wasp | 88 | Science Fiction | PG-13 | 118Numeric columns like `Tomato Rating` and `Length` are fine as is, but the columns `Genre` and `Rating` are problematic for machine learning. Those columns contain categorical data which again means that the values of those columns are from a limited set of possibilities. Modern machine learning algorithms are designed to handle only numeric and boolean (True, False) data. So, as a preprocessing step, we will need to convert the categorical columns to numeric. One solution would be simply to map each categorical value to an integer. So drama is 1, documentary 2 etc:index | genre :--: | :--: 1 | Drama 2 | Documentary 3 | Animation 4| Action 5 | Comedy 6 | Science FictionUsing this scheme we can convert the original data to:Movie | Tomato Rating | Genre | Rating | Length :---: | :---: | :---: | :---: | :---: First Man | 88 | 1 | 1 | 138Can You Ever Forgive Me | 98 | 1 | 2 | 107The Girl in the Spider's Web | 41 | 1 | 2 | -99Free Solo | 99 | 2 | 1 | 97The Grinch | 57 | 3 | 3 | 86Overlord | 80 | 4 | 2 | 109Christopher Robin | 71 | 5 | 3 | -99Ant Man and the Wasp | 88 | 6 | 1 | 118But this solution is problematic in a different way. Integers infer both an ordering and a distance where 2 is closer to 1 than 4. Since in the genre column 1 is drama, 2 is documentary, and 4 is action, our scheme implies that dramas are closer to documentaries than they are to action films, which is clearly not the case. This problem also exists in the rating column. Mapping the categories to integers in a different way will not fix this problem. No matter how clever we are in making this mapping, the problem will still exist. **So clearly this method is not the way to go**! One Hot EncodingThe solution is to do what is called one hot encoding. 
Our original table looked like:Movie | Tomato Rating | Genre | Rating | Length :---: | :---: | :---: | :---: | :---: First Man | 88 | Drama | PG-13 | 138Can You Ever Forgive Me | 98 | Drama | R | 107The Girl in the Spider's Web | 41 | Drama | R | -99Free Solo | 99 | Documentary | PG-13 | 97The Grinch | 57 | Animation | PG | 86Overlord | 80 | Action | R | 109Christopher Robin | 71 | Comedy | PG | -99Ant Man and the Wasp | 88 | Science Fiction | PG-13 | 118So, for example, we had the categorical column genre with the possible values drama, documentary, animation, action, comedy and science fiction. Instead of one column with those values, we are going to convert it to a form where each value is its own column.![](https://raw.githubusercontent.com/zacharski/ml-class/master/labs/pics/normalize2.jpg)If that data instance is of that value then we would put a **one** in the column, otherwise we would put a zero. For example, since *The Girl in the Spider's Web* is a drama, we would put a 1 in the drama column and a zero in the animation column. So we would convertMovie | Genre :---: | :---: First Man | Drama Can You Ever Forgive Me | DramaThe Girl in the Spider's Web | Drama Free Solo | Documentary The Grinch | Animation Overlord | ActionChristopher Robin | ComedyAnt Man and the Wasp | Science FictiontoMovie | Drama | Documentary | Animation | Action | Comedy | Science Fiction:--: | :--: | :--: | :--: | :--: | :--: | :--: First Man | 1 | 0 | 0| 0| 0 | 0 Can You Ever Forgive Me | 1 | 0 | 0| 0| 0 | 0 The Girl in the Spider's Web | 1 | 0 | 0| 0| 0 | 0 Free Solo | 0 | 1 | 0| 0| 0 | 0 The Grinch | 0 | 0 | 1| 0| 0 | 0 Overlord | 0 | 0 | 0| 1| 0 | 0 Christopher Robin | 0 | 0 | 0| 0| 1 | 0 Ant Man and the Wasp | 0 | 0 | 0| 0| 0 | 1 Notice that the movie *First Man* has a one in the drama column and zeroes elsewhere. The movie *Free Solo* has a one in the documentary column and zeroes elsewhere.This is the prefered way of converting categorical data (when we work with text we will see other options). An added benefit to this approach is now an instance can be of multiple categories. For example, we may want to categorize *Ant Man and the Wasp* as both a comedy and science fiction, and that is easy to do in this scheme:Movie | Drama | Documentary | Animation | Action | Comedy | Science Fiction:--: | :--: | :--: | :--: | :--: | :--: | :--: First Man | 1 | 0 | 0| 0| 0 | 0 Can You Ever Forgive Me | 1 | 0 | 0| 0| 0 | 0 The Girl in the Spider's Web | 1 | 0 | 0| 0| 0 | 0 Free Solo | 0 | 1 | 0| 0| 0 | 0 The Grinch | 0 | 0 | 1| 0| 0 | 0 Overlord | 0 | 0 | 0| 1| 0 | 0 Christopher Robin | 0 | 0 | 0| 0| 1 | 0 Ant Man and the Wasp | 0 | 0 | 0| 0| 1 | 1 If we one-hot encoded all the categorical columns in our original dataset it would look like:Movie | Tomato Rating | Action | Animation | Comedy | Documentary | Drama | Science Fiction | PG | PG-13 | R | Length :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: First Man | 88 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0| 138Can You Ever Forgive Me | 98 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1| 107The Girl in the Spider's Web | 41 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0| -99Free Solo | 99 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0| 97The Grinch | 57 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0| 86Overlord | 80 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0| 109Christopher Robin | 71 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0| -99Ant Man and the Wasp | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0| 118 CodingLet's investigate this a bit with a coding example.
###Code
import pandas as pd
bike = pd.read_csv('https://raw.githubusercontent.com/zacharski/ml-class/master/data/bike.csv')
bike = bike.set_index('Day')
bike
###Output
_____no_output_____
###Markdown
Here we are trying to predict whether someone will mountain bike or not based on the outlook, temperature, humidity, and wind. Let's forge ahead and see if we can build a decision tree classifier:
###Code
from sklearn import tree
clf = tree.DecisionTreeClassifier(criterion='entropy')
clf.fit(bike[['Outlook', 'Temperature', 'Humidity', 'Wind']], bike['Bike'])
###Output
_____no_output_____
###Markdown
And we see that doesn't work. We get the error:```ValueError: could not convert string to float: 'Sunny'```We need to one-hot encode these categorical columns. Here is how to convert the Outlook column. The steps are1. Create a new Dataframe of the one-hot encoded values for the Outlook column.2. Drop the Outlook column from the original Dataframe.3. Join the new one-hot encoded Dataframe to the original. 1. Create the new Dataframe
###Code
one_hot = pd.get_dummies(bike['Outlook'])
one_hot
###Output
_____no_output_____
###Markdown
Nice. 2. Drop the outlook column from the original Dataframe:
###Code
bike = bike.drop('Outlook', axis=1)
###Output
_____no_output_____
###Markdown
3. join the one-hot encoded Dataframe to the original
###Code
bike = bike.join(one_hot)
bike
###Output
_____no_output_____
###Markdown
It is simple, but a little tedious. Let's finish up encoding the other columns:
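(As an aside, `pandas.get_dummies` can also encode several columns in one call, which avoids the repeated drop/join. A quick sketch, not what we do below:)
```
# Equivalent shortcut: encode the remaining categorical columns in one call.
# get_dummies drops each listed column and appends its one-hot columns.
bike_alt = pd.get_dummies(bike, columns=['Temperature', 'Humidity', 'Wind'])
```
For practice, though, we will repeat the explicit three-step pattern: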
###Code
one_hot = pd.get_dummies(bike['Temperature'])
bike = bike.drop('Temperature', axis=1)
bike = bike.join(one_hot)
one_hot = pd.get_dummies(bike['Humidity'])
bike = bike.drop('Humidity', axis=1)
bike = bike.join(one_hot)
one_hot = pd.get_dummies(bike['Wind'])
bike = bike.drop('Wind', axis=1)
bike = bike.join(one_hot)
bike
###Output
_____no_output_____
###Markdown
Great! Now we can train our classifier. I will just cut and paste the previous `clf.fit` and ...
###Code
clf.fit(bike[['Outlook', 'Temperature', 'Humidity', 'Wind']], bike['Bike'])
###Output
_____no_output_____
###Markdown
Well that didn't work. The clf.fit instruction was```clf.fit(bike[['Outlook', 'Temperature', 'Humidity', 'Wind']], bike['Bike'])```So we instruct it to use the Outlook, Temperature, Humidity, and Wind columns, but we just deleted them. Instead we have the following columns:
###Code
list(bike.columns)
###Output
_____no_output_____
###Markdown
Using that list let's divide up our data into the label (what we are trying to predict) and the features (what we are using to make the prediction).
###Code
fColumns = list(bike.columns)
fColumns.remove('Bike')
bike_features = bike[fColumns]
bike_features
###Output
_____no_output_____
###Markdown
and now the label:
###Code
bike_labels = bike[['Bike']]
bike_labels
###Output
_____no_output_____
###Markdown
Now, finally, we can train our decision tree classifier.
###Code
clf.fit(bike_features, bike_labels)
###Output
_____no_output_____
###Markdown
As you can see, preparing the data, can actually take a longer time than running the machine learning component. `get_dummies` not the only wayThere are other methods to one-hot encode a dataset. For example, sklearn has a class, [OneHotEncoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html?highlight=one%20hot%20encorder), which might be a better option for many machine learning tasks. The reason I selected `get_dummies` for this notebook was a pedagogical one---`get_dummies` is a bit more transparent and you get a better sense of what one-hot-encoding does. Conditionals for munging dataLet's say we have this small DataFrame
###Code
from pandas import DataFrame, Series
students = DataFrame({'name': ['Ann', 'Ben', 'Clara', 'Danielle', 'Eric', 'Akash'],
'sex': ['f', 'm', 'f', 'f', 'm', 'm'],
'age': [21, 18, 23, 19, 20, 21]})
students
###Output
_____no_output_____
###Markdown
The column sex is categorical so we need to convert it. We could * one-hot-encode it and have two columns: f and m. * one-hot-encode it, have two columns: f and m, and then delete one of those columns. * create a column female and populate it correctly using a conditional. All three are fine options, with the last two slightly better since they reduce the dimensionality (a sketch of the second option appears a little later). Let's see how we can do the last one using a lambda expression:
###Code
students['female'] = students['sex'].apply(lambda x: True if x == 'f' else False)
students = students.drop('sex', axis=1)
students
###Output
_____no_output_____
###Markdown
That's great! Now suppose we think that whether or not a person is under 20 is relevant for our machine learning task. We can use the same type of lambda expression to create this new column:
###Code
students['under20'] = students['age'].apply(lambda x: True if x < 20 else False)
students
###Output
_____no_output_____
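Earlier we listed one-hot-encoding followed by deleting one of the resulting columns as an option; pandas can do that in a single step with `drop_first=True`. A minimal sketch on a fresh copy of the small example (the `students2` DataFrame here is just for illustration and is not used later):
```
import pandas as pd
from pandas import DataFrame

# a fresh copy of the example data, before the lambda-based conversion
students2 = DataFrame({'name': ['Ann', 'Ben', 'Clara'],
                       'sex': ['f', 'm', 'f'],
                       'age': [21, 18, 23]})
# one-hot encode sex and drop one of the two indicator columns
students2 = pd.get_dummies(students2, columns=['sex'], drop_first=True)
students2
```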
###Markdown
As you can see, working with machine learning involves working on a pipeline of various processes--we don't start with the machine learning algorithm. Before leaping into the ML algorithm, take some time to explore the data, decide whether it needs to be cleaned in any way, whether it should be one-hot-encoded, and whether some features should be dropped or new ones added. Hyperparameters.When we train a machine learning model (using `fit` in this case), the model learns a set of **parameters**. For example, in decision trees, one parameter is the depth of the tree. The depth isn't determined until the `fit` method finishes. The important point is that parameters are what the model learns on its own from analyzing the training dataset and not something we adjust.In contrast, **hyperparameters** are things we determine and are not determined by the algorithm. Hyperparameters are set before the model looks at the training data--in our case before `fit`. For decision trees there are a number of these hyperparameters. We have already seen two: `max_depth`, which controls the size of the tree, and `criterion`. Adjusting one hyperparameter may improve the accuracy of your classifier or it may worsen it. We have already learned that we shouldn't test our model using the same data that we trained on. Why not? Because the model is already tuned to the specific instances in our training data. A kNN classifier, for example, may memorize every instance in our training data--*Gabby Douglas who is 49 inches tall and weighs 90 pounds is a gymnast*. If we test using that same data, the accuracy will tend to be higher than if we tested using data the classifier has never seen before. Again, if we told the algorithm that someone who is 49 inches tall and weighs 90 pounds is a gymnast, we shouldn't find it surprising that when we ask it what sport someone who is 49 inches tall and weighs 90 pounds plays, it predicts *gymnast*. We want to see if the algorithm learned or generalized something from processing the dataset. In some previous labs, we reserved 20% of the original data to test on and used 80% for training. Now let's imagine a process where we will adjust hyperparameters to improve the accuracy of our model. So we build a classifier with one setting of the hyperparameters, build another with a different setting, and see which one is more accurate. One approach might be:1. Use 80% of the data to train on.2. Test the classifier using the 20% test set and get the accuracy.3. Adjust a hyperparameter and create a new classifier.4. Use the same 80% of the data to train the new classifier.5. Test the classifier using the 20% test set and get the accuracy.6. Keep repeating this to find the value of the hyperparameter that performs the best.7. The accuracy of your classifier will be the highest one obtained from evaluating the 20% test set.The problem with this approach is that since we are tuning the hyperparameters based on the accuracy on the test set, some of the information about the test set is leaking into our classifier. Let me explain what we mean by information leaking into the classifier.Let's look at our example of categorizing athletes into one of three categories: gymnast, basketball player, and marathoner. Leilani Mitchell is not in the training set but is in the test set. She is 5 foot 5 inches tall and weighs 138 pounds. Initially, she was among the instances in the test set that were misclassified. We kept adjusting the hyperparameters until we improved accuracy and now she is correctly classified as a basketball player. 
So we tuned our classifier to work well with her and others in the test set. That is what we mean by information from the test set leaking into the classifier. So again, we may get an artificially higher accuracy that is not reflective of the algorithm's performance on unseen data.**So what can we do?**The solution is to divide the original dataset into three:1. the training set which we use to train our model.2. the validation set which we use to test our model so we can adjust the hyperparameters.3. the test set which we use to perform an evaluation of the final model fit on the training set. We make our final adjustment of the hyperparameters **before** we evaluate the model using the test set.There are many ways to divide up the original data into these three sets. For example, maybe 20% is reserved for the test set, 20% for validation and 60% for training. However, there is a slightly better way. Cross ValidationFor cross validation we are going to divide the dataset (typically just the training dataset) into roughly equal sized buckets. Typically we would divide the data into 10 parts and this is called 10-fold cross validation. To reiterate, with this method we have one data set which we divide randomly into 10 parts. We use 9 of those parts for training and reserve one tenth for validation. We repeat this procedure 10 times, each time reserving a different tenth for validation.Let’s look at an example. Suppose we want to build a classifier that just answers yes or no to the question *Is this person a professional basketball player?* And our data consists of information about 500 basketball players and 500 non-basketball players. Step 1. Divide the data into 10 buckets.![](https://raw.githubusercontent.com/zacharski/ml-class/master/labs/pics/buckets.png)We put 50 basketball players and 50 non-players in each bucket so each bucket contains information on 100 individuals. Step 2. We iterate through the following steps 10 times1. During each iteration hold back one of the buckets. For iteration 1, we will hold back bucket 1, iteration 2, bucket 2, and so on.2. We will train the classifier with data from the other buckets. (during the first iteration we will train with the data in buckets 2 through 10)3. We will validate the classifier we just built using data from the bucket we held back and save the results. In our case these results might be: 35 of the basketball players were classified correctly and 29 of the non-basketball players were classified correctly. Step 3. We sum up the results.Once we finish the ten iterations we sum the results. Perhaps we find that 937 of the 1,000 individuals were categorized correctly. SummaryUsing cross-validation, every instance in our data is used in training and, in a different iteration, in validation. This results in a less biased model. By **bias** we mean that the algorithm is less accurate because it does not take into account all relevant information in the data. With cross-validation we typically train on a larger percentage of the data than we would if we set aside a fixed validation set. One small disadvantage is that it now takes 10 times as long to run. Leave One OutHere is a suggestion from Lucy:![](https://raw.githubusercontent.com/zacharski/ml-class/master/labs/pics/nfold.png)In the machine learning literature, n-fold cross validation (where n is the number of samples in our data set) is called leave-one-out. Lucy, above, already mentioned one benefit of leave-one-out—at every iteration we are using the largest possible amount of our data for training. 
The other benefit is that it is deterministic. What do we mean by ‘deterministic’? Suppose Lucy spends an intense 80 hour week creating and coding a new classifier. It is Friday and she is exhausted so she asks two of her colleagues (Emily and Li) to evaluate the classifier over the weekend. She gives each of them the classifier and the same dataset and asks them to use 10-fold cross validation. On Monday she asks for the results ..![](https://raw.githubusercontent.com/zacharski/ml-class/master/labs/pics/nfoldwomen2.png)Hmm. They did not get the same results. Did Emily or Li make a mistake? Not necessarily. In 10-fold cross validation we place the data randomly into 10 buckets. Since there is this random element, it is likely that Emily and Li did not divide the data into buckets in exactly the same way. In fact, it is highly unlikely that they did. So when they train the classifier, they are not using exactly the same data and when they test this classifier they are using different test sets. So it is quite logical that they would get different results. This result has nothing to do with the fact that two different people were performing the evaluation. If Lucy herself ran 10-fold cross validation twice, she too would get slightly different results. The reason we get different results is that there is a random component to placing the data into buckets. So 10-fold cross validation is called non-deterministic because when we run the test again we are not guaranteed to get the same result. In contrast, the leave-one-out method is deterministic. Every time we use leave-one-out on the same classifier and the same data we will get the same result. That is a good thing! The disadvantages of leave-one-outThe main disadvantage of leave-one-out is the computational expense of the method. Consider a modest-sized dataset of 10,000 instances and that it takes one minute to train a classifier. For 10-fold cross validation we will spend 10 minutes in training. In leave-one-out we will spend 16 hours in training. If our dataset contains 10 million entries the total time spent in training would nearly be two years. Eeeks!![](https://raw.githubusercontent.com/zacharski/ml-class/master/labs/pics/twoyears.png)The other disadvantage of leave-one-out is related to stratification. Stratification.Let us return to the example of building a classifier that predicts what sport a woman plays (basketball, gymnastics, or track). When training the classifier we want the training data to be representative and contain data from all three classes. Suppose we assign data to the training set in a completely random way. It is possible that no basketball players would be included in the training set and because of this, the resulting classifier would not be very good at classifying basketball players. Or consider creating a dataset of 100 athletes. First we go to the Women’s NBA website and write down the info on 33 basketball players; next we go to Wikipedia and get 33 women who competed in gymnastics at the 2012 Olympics and write that down; finally, we go again to Wikipedia to get information on women who competed in track at the Olympics and record data for 34 people. So our dataset looks like this:![](https://raw.githubusercontent.com/zacharski/ml-class/master/labs/pics/womensports.png)Let’s say we are doing 10-fold cross validation. We start at the beginning of the list and put every ten people in a different bucket. In this case we have 10 basketball players in both the first and second buckets. 
The third bucket has both basketball players and gymnasts. The fourth and fifth buckets solely contain gymnasts and so on. None of our buckets are representative of the dataset as a whole and you would be correct in thinking this would skew our results. The preferred method of assigning instances to buckets is to make sure that the classes (basketball players, gymnasts, marathoners) are represented in the same proportions as they are in the complete dataset. Since one-third of the complete dataset consists of basketball players, one-third of the entries in each bucket should also be basketball players. And one-third of the entries should be gymnasts and one-third marathoners. This is called stratification and this is a good thing. The problem with the leave-one-out evaluation method is that necessarily all the test sets are non-stratified since they contain only one instance. In sum, while leave-one-out may be appropriate for very small datasets, 10-fold cross validation is by far the most popular choice. CodingLet's see how we can use cross validation with the Iris dataset. First, let's load the dataset:
###Code
iris = pd.read_csv('https://raw.githubusercontent.com/zacharski/ml-class/master/data/iris.csv')
iris
###Output
_____no_output_____
###Markdown
Now let's divide this into a training set and a test set using an 80-20 split.
###Code
from sklearn.model_selection import train_test_split
iris_train, iris_test = train_test_split(iris, test_size = 0.2)
iris_train
###Output
_____no_output_____
###Markdown
10 fold cross validation on iris_trainFirst, to make things as clear as possible, we will split the iris_train dataset into the features and the labels:
###Code
iris_train_features = iris_train[['Sepal Length', 'Sepal Width', 'Petal Length', 'Petal Width']]
iris_train_labels = iris_train[['Class']]
###Output
_____no_output_____
###Markdown
Let's also create an instance of a decision tree classifier
###Code
from sklearn import tree
clf = tree.DecisionTreeClassifier(criterion='entropy')
###Output
_____no_output_____
###Markdown
The cross validation steps Step 1. Import cross_val_score
###Code
from sklearn.model_selection import cross_val_score
###Output
_____no_output_____
###Markdown
Step 2. run cross validation
###Code
scores = cross_val_score(clf, iris_train_features, iris_train_labels, cv=10)
###Output
_____no_output_____
###Markdown
`cv=10` specifies that we perform 10-fold cross validation. The function returns a 10-element array, where each element is the accuracy of that fold. Let's take a look:
###Code
print(scores)
print("The average accuracy is %5.3f" % (scores.mean()))
###Output
[1. 0.83333333 0.91666667 1. 0.75 0.91666667
1. 0.91666667 1. 1. ]
The average accuracy is 0.933
###Markdown
So `scores` contains the accuracy for each of the 10 runs. In my case it was:```[1. 0.83333333 1. 0.91666667 0.91666667 0.91666667 0.91666667 0.91666667 1. 0.91666667]The average accuracy is 0.933```So the best runs were 100% accurate and the worst was 83%. The average accuracy was 93% You tryWe have covered a lot of material and now is your chance to practice it using the Pima Indians Diabetes Data we used before. The data file is at [https://raw.githubusercontent.com/zacharski/ml-class/master/data/pima-indians-diabetes.csv](https://raw.githubusercontent.com/zacharski/ml-class/master/data/pima-indians-diabetes.csv)The data file does not contain a header row. Of course you can name the columns whatever you want, but I used:```['pregnant', 'glucose', 'bp', 'skinfold', 'insulin', 'bmi', 'pedigree', 'age', 'diabetes']``` Load in the data fileSo load in the data file and let's reserve 20% for `pima_test` and 80% for `pima_train`.
###Code
# TO DO
pima = pd.read_csv('https://raw.githubusercontent.com/zacharski/ml-class/master/data/pima-indians-diabetes.csv', names=['pregnant', 'glucose', 'bp', 'skinfold', 'insulin', 'bmi', 'pedigree', 'age', 'diabetes'])
pima_train, pima_test = train_test_split(pima, test_size = 0.2)
pima_train
###Output
_____no_output_____
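Since we just read about stratification, note that `train_test_split` can also make a stratified split so both sets keep the same proportion of diabetic and non-diabetic cases. This is optional for the exercise; a minimal sketch (the `random_state` value here is arbitrary):
```
from sklearn.model_selection import train_test_split

# keep the class proportions of the diabetes label the same in both splits
pima_train, pima_test = train_test_split(
    pima, test_size=0.2, stratify=pima['diabetes'], random_state=42)
```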
###Markdown
creating separate data structures for the features and labelsNext, for convenience let's create 2 DataFrames and 2 Series. The DataFrames are:* `pima_train_features` will contain the feature columns from `pima_train` * `pima_test_features` will contain the feature columns from `pima_test`The Series are:* `pima_train_labels` will contain the `diabetes` column* `pima_test_labels` will also contain the `diabetes` column
###Code
import numpy as np
# TO DO
pima_train_features = pima_train.drop(['diabetes'], axis=1)
pima_train_labels = pima_train['diabetes']
pima_test_features = pima_test.drop(['diabetes'], axis=1)
pima_test_labels = pima_test['diabetes']
# print(type(pima_test_labels))
###Output
_____no_output_____
###Markdown
Exploring hyperparameters: max_depthWe are interested in seeing which has higher accuracy:1. a classifier unconstrained for max_depth 2. a classifier with max_depth of 4Create 2 decision tree classifiers: `clf` which is unconstrained for depth and `clf4` which has a max_depth of 4.
###Code
# TO DO
clf = tree.DecisionTreeClassifier(criterion='entropy')
clf4 = tree.DecisionTreeClassifier(criterion='entropy', max_depth=4)
###Output
_____no_output_____
###Markdown
using 10-fold cross validation get the average accuracy of `clf`
###Code
# TO DO
clf_scores = cross_val_score(clf, pima_train_features, pima_train_labels, cv=10)
print(clf_scores)
print("The average accuracy is %5.3f" % (clf_scores.mean()))
###Output
[0.64516129 0.61290323 0.64516129 0.69354839 0.70491803 0.75409836
0.7704918 0.70491803 0.78688525 0.7704918 ]
The average accuracy is 0.709
###Markdown
using 10-fold cross validation get the average accuracy of `clf4`
###Code
# TO DO
clf4_scores = cross_val_score(clf4, pima_train_features, pima_train_labels, cv=10)
print(clf4_scores)
print("The average accuracy is %5.3f" % (clf4_scores.mean()))
print("clf with max_depth 4 has better accuracy")
###Output
[0.70967742 0.67741935 0.80645161 0.66129032 0.68852459 0.67213115
0.73770492 0.68852459 0.83606557 0.72131148]
The average accuracy is 0.720
clf with max_depth 4 has better accuracy
###Markdown
which has better accuracy, the one unconstrained for depth or the one whose max_depth is 4? Using the entire training set, train a new classifier with the best setting for the max_depth hyperparameter
###Code
# TO DO
from sklearn.metrics import accuracy_score
# use the better setting found above (max_depth=4)
clf_max = tree.DecisionTreeClassifier(criterion='entropy', max_depth=4)
clf_max.fit(pima_train_features, pima_train_labels)
# clf_max = tree.DecisionTreeClassifier(criterion='entropy', max_depth=6)
# clf_max_scores = cross_val_score(clf_max, pima_train_features, pima_train_labels)
# print(clf_max_scores)
# print("The average accuracy is %5.3f" % (clf_max_scores.mean()))
###Output
_____no_output_____
###Markdown
Finally, using the test set what is the accuracy?
###Code
# TO DO
predictions = clf_max.predict(pima_test_features)
accuracy_score(pima_test_labels, predictions)
###Output
_____no_output_____
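As a side note, sklearn classifiers also have a `score` method that computes accuracy directly, so the two lines above could be collapsed into one. A minimal sketch using the classifier trained above:
```
# equivalent to running predict and then accuracy_score
clf_max.score(pima_test_features, pima_test_labels)
```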
###Markdown
AutomationLet's say we want to find the best setting for `max_depth`, checking the values 3, 4, 5, 6, ..., 12, and the best for `min_samples_split`, trying 2, 3, 4, and 5. That makes 10 values for `max_depth` and 4 for `min_samples_split`, so 40 different classifiers, and it would be time-consuming to do that by hand. Fortunately, we can automate the process using [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html?highlight=gridsearchcvsklearn.model_selection.GridSearchCV). First we will import the module:
###Code
from sklearn.model_selection import GridSearchCV
###Output
_____no_output_____
###Markdown
Now we are going to specify the values we want to test. For `max_depth` we want 3, 4, 5, 6, ... 12 and for `min_samples_split` we want 2, 3, 4, 5:
###Code
hyperparam_grid = [
{'max_depth': [3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
     'min_samples_split': [2, 3, 4, 5]}
]
###Output
_____no_output_____
###Markdown
Next, let's create a decision tree classifier:
###Code
clf = tree.DecisionTreeClassifier(criterion='entropy')
###Output
_____no_output_____
###Markdown
now create a grid search object
###Code
grid_search = GridSearchCV(clf, hyperparam_grid, cv=10)
###Output
_____no_output_____
###Markdown
When we create the object we pass in:* the classifer - in our case `clf`* the Python dictionary containing the hyperparameters we want to evaluate. In our case `hyperparam_grid`* how many bins we are using. In our case 10: `cv=10` now perform `fit`
###Code
grid_search.fit(pima_train_features, pima_train_labels)
###Output
_____no_output_____
###Markdown
When `grid_search` runs, it creates 40 different classifiers and runs 10-fold cross validation on each of them. We can ask `grid_search` what were the parameters of the classifier with the highest accuracy:
###Code
grid_search.best_params_
###Output
_____no_output_____
###Markdown
We can also ask `grid_search` to return the best classifier so we can use it to make predictions.
###Code
predictions = grid_search.best_estimator_.predict(pima_test_features)
from sklearn.metrics import accuracy_score
accuracy_score(pima_test_labels, predictions)
###Output
_____no_output_____ |
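`GridSearchCV` also records the cross-validated score of the winning combination and a full table of results for every combination it tried. A minimal sketch of how one might inspect them (the `param_*` column names come from the hyperparameters we searched over):
```
import pandas as pd

# cross-validated accuracy of the best hyperparameter combination
print(grid_search.best_score_)

# one row per combination tried during the search
results = pd.DataFrame(grid_search.cv_results_)
results[['param_max_depth', 'param_min_samples_split', 'mean_test_score']].head()
```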
Traffic_Sign_Classifier-Copy1.ipynb | ###Markdown
Self-Driving Car Engineer Nanodegree Deep Learning Project: Build a Traffic Sign Recognition ClassifierIn this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary. > **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/481/view) for this project.The [rubric](https://review.udacity.com/!/rubrics/481/view) contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited, typically by double-clicking the cell to enter edit mode. --- Step 0: Load The Data
###Code
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
training_file = ?
validation_file=?
testing_file = ?
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
###Output
_____no_output_____
###Markdown
--- Step 1: Dataset Summary & ExplorationThe pickled data is a dictionary with 4 key/value pairs:- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.- `'sizes'` is a list containing tuples, (width, height) representing the original width and height of the image.- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results. Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
###Code
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
# TODO: Number of training examples
n_train = ?
# TODO: Number of validation examples
n_validation = ?
# TODO: Number of testing examples.
n_test = ?
# TODO: What's the shape of a traffic sign image?
image_shape = ?
# TODO: How many unique classes/labels there are in the dataset.
n_classes = ?
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
###Output
_____no_output_____
###Markdown
Include an exploratory visualization of the dataset Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc. The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.**NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
###Code
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline
###Output
_____no_output_____
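One simple exploration along the lines suggested above is to plot how many training examples each class has. A minimal sketch, assuming the data-loading cell at the top has been filled in so that `y_train` exists:
```
import numpy as np
# count how many training images belong to each class id
counts = np.bincount(y_train)
plt.bar(range(len(counts)), counts)
plt.xlabel('class id')
plt.ylabel('number of training images')
plt.show()
```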
###Markdown
---- Step 2: Design and Test a Model ArchitectureDesign and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play! With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. There are various aspects to consider when thinking about this problem:- Neural network architecture (is the network over or underfitting?)- Play around preprocessing techniques (normalization, rgb to grayscale, etc)- Number of examples per label (some have more than others).- Generate fake data.Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these. Pre-process the Data Set (normalization, grayscale, etc.) Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project. Other pre-processing steps are optional. You can try different techniques to see if it improves performance. Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
###Code
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
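A minimal sketch of the quick normalization described above, assuming the data-loading cell has been completed so the arrays exist (other pre-processing, such as grayscaling, is left out here):
```
import numpy as np

# quick approximate normalization: roughly zero mean and unit-ish variance
X_train = (X_train.astype(np.float32) - 128) / 128
X_valid = (X_valid.astype(np.float32) - 128) / 128
X_test = (X_test.astype(np.float32) - 128) / 128
```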
###Markdown
Model Architecture
###Code
### Define your architecture here.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
Train, Validate and Test the Model A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
###Code
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
--- Step 3: Test a Model on New ImagesTo give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name. Load and Output the Images
###Code
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
Predict the Sign Type for Each Image
###Code
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
Analyze Performance
###Code
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
###Output
_____no_output_____
###Markdown
Output Top 5 Softmax Probabilities For Each Image Found on the Web For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.htmltop_k) could prove helpful here. The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.`tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. `tf.nn.top_k` is used to choose the three classes with the highest probability:``` (5, 6) arraya = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202], [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401, 0.15899337], [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 , 0.23892179], [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 , 0.16505091], [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137, 0.09155967]])```Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:```TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202], [ 0.28086119, 0.27569815, 0.18063401], [ 0.26076848, 0.23892179, 0.23664738], [ 0.29198961, 0.26234032, 0.16505091], [ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5], [0, 1, 4], [0, 5, 1], [1, 3, 5], [1, 4, 3]], dtype=int32))```Looking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`, you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.
###Code
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
Project WriteupOnce you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file. > **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. --- Step 4 (Optional): Visualize the Neural Network's State with Test Images This section is not required but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol. Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimulus image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process; for instance, if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for its second convolutional layer you could enter conv2 as the tf_activation variable.For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image. Your output should look something like this (above)
###Code
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, ect if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
if activation_min != -1 & activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
###Output
_____no_output_____
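A hypothetical usage sketch for the helper above; `sess`, `saver`, `one_image`, and a layer tensor such as `conv1` are assumed to come from your own training code and are not defined in this template:
```
# Hypothetical usage (the names below come from your own training code):
# with tf.Session() as sess:
#     saver.restore(sess, './my_model')
#     # feed a single preprocessed image with a leading batch dimension
#     outputFeatureMap(one_image[np.newaxis, ...], conv1, plt_num=1)
```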
###Markdown
Self-Driving Car Engineer Nanodegree Project: Build a Traffic Sign Recognition Classifier --- Step 0: Load The Data
###Code
import pickle
import os
import pandas as pd
import numpy as np
import tensorflow as tf
training_file = '/home/devesh/Downloads/traffic-signs-data/train.p'
validation_file= '/home/devesh/Downloads/traffic-signs-data/valid.p'
testing_file = '/home/devesh/Downloads/traffic-signs-data/test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
X_train.shape
###Output
_____no_output_____
###Markdown
--- Step 1: Dataset Summary & Exploration Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
###Code
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
# TODO: Number of training examples
n_train = X_train.shape[0]
# TODO: Number of validation examples
n_validation = X_valid.shape[0]
# TODO: Number of testing examples.
n_test = X_test.shape[0]
# TODO: What's the shape of a traffic sign image?
image_shape = X_train[0].shape
# TODO: How many unique classes/labels there are in the dataset.
n_classes = np.unique(y_train).size
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
### Data exploration visualization goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
import random
# Visualizations will be shown in the notebook.
%matplotlib inline
# show image of 10 random data points
fig, axs = plt.subplots(2,5, figsize=(15, 6))
fig.subplots_adjust(hspace = .2, wspace=.001)
axs = axs.ravel()
for i in range(10):
index = random.randint(0, len(X_train))
image = X_train[index]
axs[i].axis('off')
axs[i].imshow(image.squeeze())
axs[i].set_title(y_train[index])
fig.savefig('/home/devesh/1.png')
fig2=plt.hist(y_train, bins = n_classes)
total_n_train = len(X_train)
print("Total number of training examples =", total_n_train)
plt.savefig('/home/devesh/2.png')
import cv2
# Grayscales an image
def grayscale(img):
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
return img
def normalize(data):
return data / 255 * 0.8 + 0.1
def preprocess(data):
gray_images = []
for image in data:
gray = grayscale(image)
gray_images.append(gray)
return np.array(gray_images)
from numpy import newaxis
print('Preprocessing training data...')
# Iterate through grayscale
X_train = preprocess(X_train)
X_train = X_train[..., newaxis]
# Normalize
X_train = normalize(X_train)
print('Finished preprocessing training data.')
# Double-check that the image is changed to depth of 1
image_shape2 = X_train.shape
print("Processed training data shape =", image_shape2)
print('Preprocessing testing data...')
# Iterate through grayscale
X_test = preprocess(X_test)
X_test = X_test[..., newaxis]
# Normalize
X_test = normalize(X_test)
print('Finished preprocessing testing data.')
# Double-check that the image is changed to depth of 1
image_shape3 = X_test.shape
print("Processed testing data shape =", image_shape3)
print('All data preprocessing complete.')
# Generate additional data
from scipy import ndimage
import random
# min_desired below is just mean_pics but wanted to make the code below easier to distinguish
pics_in_class = np.bincount(y_train)
mean_pics = int(np.mean(pics_in_class))
min_desired = int(mean_pics)
print('Generating new data.')
# Angles to be used to rotate images in additional data made
angles = [-10, 10, -15, 15, -20, 20]
# Iterate through each class
for i in range(len(pics_in_class)):
# Check if less data than the mean
if pics_in_class[i] < min_desired:
# Count how many additional pictures we want
new_wanted = min_desired - pics_in_class[i]
picture = np.where(y_train == i)
more_X = []
more_y = []
# Make the number of additional pictures needed to arrive at the mean
for num in range(new_wanted):
# Rotate images and append new ones to more_X, append the class to more_y
more_X.append(ndimage.rotate(X_train[picture][random.randint(0,pics_in_class[i] - 1)], random.choice(angles), reshape=False))
more_y.append(i)
# Append the pictures generated for each class back to the original data
X_train = np.append(X_train, np.array(more_X), axis=0)
y_train = np.append(y_train, np.array(more_y), axis=0)
print('Additional data generated. Any classes lacking data now have', min_desired, 'pictures.')
# Splitting the training dataset into training and validation data
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
# Shuffle the data prior to splitting
X_train, y_train = shuffle(X_train, y_train)
X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train, stratify = y_train, test_size=0.1, random_state=23)
print('Dataset successfully split for training and validation.')
import tensorflow as tf
tf.reset_default_graph()
EPOCHS = 10
BATCH_SIZE = 150
from tensorflow.contrib.layers import flatten
def myLeNet(x):
# Hyperparameters
mu = 0
sigma = 0.1
# Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
# Weight and bias
# If not using grayscale, the third number in shape would be 3
c1_weight = tf.Variable(tf.truncated_normal(shape = (5, 5, 1, 6), mean = mu, stddev = sigma))
c1_bias = tf.Variable(tf.zeros(6))
# Apply convolution
conv_layer1 = tf.nn.conv2d(x, c1_weight, strides=[1, 1, 1, 1], padding='VALID') + c1_bias
# Activation for layer 1
conv_layer1 = tf.nn.relu(conv_layer1)
# Pooling. Input = 28x28x6. Output = 14x14x6.
conv_layer1 = tf.nn.avg_pool(conv_layer1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# Layer 2: Convolutional. Output = 10x10x16.
# Note: The second layer is implemented the exact same as layer one, with layer 1 as input instead of x
# And then of course changing the numbers to fit the desired ouput of 10x10x16
# Weight and bias
c2_weight = tf.Variable(tf.truncated_normal(shape = (5, 5, 6, 16), mean = mu, stddev = sigma))
c2_bias = tf.Variable(tf.zeros(16))
# Apply convolution for layer 2
conv_layer2 = tf.nn.conv2d(conv_layer1, c2_weight, strides=[1, 1, 1, 1], padding='VALID') + c2_bias
# Activation for layer 2
conv_layer2 = tf.nn.relu(conv_layer2)
# Pooling. Input = 10x10x16. Output = 5x5x16.
conv_layer2 = tf.nn.avg_pool(conv_layer2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# Flatten to get to fully connected layers. Input = 5x5x16. Output = 400.
flat = tf.contrib.layers.flatten(conv_layer2)
    # Layer 3: Fully Connected. Input = 400. Output = 200.
fc1_weight = tf.Variable(tf.truncated_normal(shape = (400, 200), mean = mu, stddev = sigma))
fc1_bias = tf.Variable(tf.zeros(200))
# Here is the main change versus a convolutional layer - matrix multiplication instead of 2D convolution
fc1 = tf.matmul(flat, fc1_weight) + fc1_bias
# Activation for the first fully connected layer.
# Same thing as before
fc1 = tf.nn.relu(fc1)
# Dropout, to prevent overfitting
fc1 = tf.nn.dropout(fc1, keep_prob)
    # Layer 4: Fully Connected. Input = 200. Output = 100.
# Same as the fc1 layer, just with updated output numbers
fc2_weight = tf.Variable(tf.truncated_normal(shape = (200, 100), mean = mu, stddev = sigma))
fc2_bias = tf.Variable(tf.zeros(100))
# Again, matrix multiplication
fc2 = tf.matmul(fc1, fc2_weight) + fc2_bias
# Activation.
fc2 = tf.nn.relu(fc2)
# Dropout
fc2 = tf.nn.dropout(fc2, keep_prob)
    # Layer 5: Fully Connected. Input = 100. Output = 43.
# Since this is the final layer, output needs to match up with the number of classes
fc3_weight = tf.Variable(tf.truncated_normal(shape = (100, 43), mean = mu, stddev = sigma))
fc3_bias = tf.Variable(tf.zeros(43))
# Again, matrix multiplication
logits = tf.matmul(fc2, fc3_weight) + fc3_bias
return logits
# Set placeholder variables for x, y, and the keep_prob for dropout
# Also, one-hot encode y
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
keep_prob = tf.placeholder(tf.float32)
one_hot_y = tf.one_hot(y, 43)
rate = 0.005
logits = myLeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
# The below is used in the validation part of the neural network
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob : 1.0})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
loss = sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob : 0.7})
validation_accuracy = evaluate(X_valid, y_valid)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet')
print("Model saved")
# Launch the model on the test data
with tf.Session() as sess:
saver.restore(sess, './lenet')
test_accuracy = sess.run(accuracy_operation, feed_dict={x: X_test, y: y_test, keep_prob : 1.0})
print('Test Accuracy: {}'.format(test_accuracy))
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline
add_pics = os.listdir("/home/devesh/udac_nd/CarND-Traffic-Sign-Classifier-Project/test/")
print(add_pics)
# Show the images, add to a list to process for classifying
add_pics_data = []
for i in add_pics:
i = '/home/devesh/udac_nd/CarND-Traffic-Sign-Classifier-Project/test/' + i
image = mpimg.imread(i)
add_pics_data.append(image)
plt.imshow(image)
plt.show()
# Make into numpy array for processing
add_pics_data = np.array(add_pics_data)
# First, double-check the image shape to make sure it matches the original data's 32x32x3 size
print(add_pics_data.shape)
add_pics_data = preprocess(add_pics_data)
add_pics_data = add_pics_data[..., newaxis]
# Normalize
add_pics_data = normalize(add_pics_data)
print('Finished preprocessing additional pictures.')
new_image_shape = add_pics_data.shape
print("Processed additional pictures shape =", new_image_shape)
with tf.Session() as sess:
saver.restore(sess, './lenet')
new_pics_classes = sess.run(logits, feed_dict={x: add_pics_data, keep_prob : 1.0})
### Visualize the softmax probabilities here.
### Feel free to use as many code cells as needed.
with tf.Session() as sess:
predicts = sess.run(tf.nn.top_k(new_pics_classes, k=5, sorted=True))
for i in range(len(predicts[0])):
print('Image', i, 'probabilities:', predicts[0][i], '\n and predicted classes:', predicts[1][i])
###Output
Image 0 probabilities: [10.616723 3.8455713 2.669806 2.2186196 0.87587404]
and predicted classes: [14 0 34 38 15]
Image 1 probabilities: [16.987305 8.380401 4.820599 2.2235348 0.91358745]
and predicted classes: [38 34 13 14 15]
Image 2 probabilities: [ 5.0054965 4.6089253 1.2927918 -0.76526076 -1.203482 ]
and predicted classes: [10 9 3 5 35]
Image 3 probabilities: [27.546165 15.932422 2.9358974 2.172908 -0.5163945]
and predicted classes: [13 38 15 2 35]
Image 4 probabilities: [8.483552 8.009216 2.5893536 2.302762 2.0210102]
and predicted classes: [29 18 40 37 12]
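One thing to note about the output above: the values printed as "probabilities" are raw logits (they are larger than 1), because `tf.nn.top_k` was applied directly to the network outputs. A sketch of how true softmax probabilities could be reported instead, reusing the variables defined in the cell above:
```
with tf.Session() as sess:
    saver.restore(sess, './lenet')
    # convert logits to softmax probabilities before taking the top 5
    softmax_probs = sess.run(tf.nn.softmax(logits), feed_dict={x: add_pics_data, keep_prob: 1.0})
    top5 = sess.run(tf.nn.top_k(tf.constant(softmax_probs), k=5, sorted=True))
print(top5)
```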
###Markdown
Self-Driving Car Engineer Nanodegree Deep Learning Project: Build a Traffic Sign Recognition Classifier --- Step 0: Load The Data
###Code
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
training_file = "../data/train.p"
validation_file= "../data/valid.p"
testing_file = "../data/test.p"
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
assert(len(X_train) == len(y_train))
assert(len(X_valid) == len(y_valid))
assert(len(X_test) == len(y_test))
###Output
_____no_output_____
###Markdown
--- Step 1: Dataset Summary & Exploration Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
###Code
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
import numpy as np
# TODO: Number of training examples
n_train = len(X_train)
# TODO: Number of validation examples
n_validation = len(X_valid)
# TODO: Number of testing examples.
n_test = len(X_test)
# TODO: What's the shape of a traffic sign image?
image_shape = X_train[0].shape
# TODO: How many unique classes/labels there are in the dataset.
#n_classes = 43
n_classes = len(np.unique(y_train))
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Number of validation examples =", n_validation)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
###Output
Number of training examples = 34799
Number of testing examples = 12630
Number of validation examples = 4410
Image data shape = (32, 32, 3)
Number of classes = 43
###Markdown
Include an exploratory visualization of the dataset
###Code
import random
import numpy as np
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline
index = random.randint(0, len(X_train))
#for index in range(0, n_train):
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image, cmap = "gray")
print(y_train[index])
print(index)
#index - index + 1
###Output
2
33432
###Markdown
---- Step 2: Design and Test a Model Architecture Pre-process the Data Set (normalization, grayscale, etc.)
###Code
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
#preprocess data - shuffle
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
import tensorflow as tf
EPOCHS = 50
BATCH_SIZE = 128
###Output
_____no_output_____
###Markdown
Model Architecture
###Code
### Define your architecture here.
### Feel free to use as many code cells as needed.
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
    # Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 3, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# Activation.
conv1 = tf.nn.relu(conv1)
    # Layer 2: Convolutional. Input = 28x28x6. Output = 12x12x10 (5x5 kernel, stride 2, VALID).
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 10), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(10))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 2, 2, 1], padding='VALID') + conv2_b
# Activation.
conv2 = tf.nn.relu(conv2)
    # Layer 3: Convolutional. Input = 12x12x10. Output = 8x8x16.
conv3_W = tf.Variable(tf.truncated_normal(shape=(5,5,10,16),mean=mu,stddev=sigma))
conv3_b =tf.Variable(tf.zeros(16))
conv3 = tf.nn.conv2d(conv2,conv3_W,strides=[1,1,1,1],padding='VALID') + conv3_b
# Activation.
conv3 = tf.nn.relu(conv3)
# Pooling. Input = 8x8x16. Output = 4x4x16.
conv3= tf.nn.max_pool(conv3,ksize=[1,2,2,1],strides=[1,2,2,1],padding='VALID')
# Flatten. Input = 4x4x16. Output = 256.
f= flatten(conv3)
# Layer 4: Fully Connected. Input = 256. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(256, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(f, fc1_W) + fc1_b
# Activation.
fc1 = tf.nn.relu(fc1)
# Introduce Dropout after first fully connected layer
fc1 = tf.nn.dropout(fc1, keep_prob)
# Layer 5: Fully Connected. Input = 120. Output = 100.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 100), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(100))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# Activation.
fc2 = tf.nn.relu(fc2)
# Layer 6: Fully Connected. Input = 100. Output = 84.
fc3_W = tf.Variable(tf.truncated_normal(shape=(100, 84), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(84))
fc3 = tf.matmul(fc2, fc3_W) + fc3_b
# Activation.
fc3= tf.nn.relu(fc3)
# Layer 7: Fully Connected. Input = 84. Output = 43.
fc4_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma))
fc4_b = tf.Variable(tf.zeros(43))
logits = tf.matmul(fc3, fc4_W) + fc4_b
return logits
w = tf.placeholder(tf.float32, (None, 32, 32, 3))
b = tf.placeholder(tf.int32, (None))
# one hot encoding for output labels
one_hot_y = tf.one_hot(b, n_classes)
# defining the dropout probability after fully connected layer in the architecture
keep_prob = tf.placeholder(tf.float32)
###Output
_____no_output_____
###Markdown
Train, Validate and Test the Model A validation set can be used to assess how well the model is performing. A low accuracy on the training and validationsets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
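The rule of thumb above can be turned into a tiny helper; the thresholds below are illustrative assumptions, not values required by the project:

```python
def diagnose_fit(train_acc, valid_acc, gap=0.05, low=0.90):
    """Rough heuristic: low accuracy everywhere suggests underfitting,
    a large train/validation gap suggests overfitting."""
    if train_acc < low and valid_acc < low:
        return 'possible underfitting'
    if train_acc - valid_acc > gap:
        return 'possible overfitting'
    return 'fit looks reasonable'

# Example: diagnose_fit(0.99, 0.93) -> 'possible overfitting'
```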
###Code
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
#Train Pipeline
rate = 0.001
logits = LeNet(w)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
#model evaluation
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={w: batch_x, b: batch_y,keep_prob:1})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
#train the model
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={w: batch_x, b: batch_y,keep_prob:0.5})
validation_accuracy = evaluate(X_valid, y_valid)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet')
print("Model saved")
###Output
Training...
EPOCH 1 ...
Validation Accuracy = 0.581
EPOCH 2 ...
Validation Accuracy = 0.773
EPOCH 3 ...
Validation Accuracy = 0.859
EPOCH 4 ...
Validation Accuracy = 0.902
EPOCH 5 ...
Validation Accuracy = 0.921
EPOCH 6 ...
Validation Accuracy = 0.924
EPOCH 7 ...
Validation Accuracy = 0.933
EPOCH 8 ...
Validation Accuracy = 0.933
EPOCH 9 ...
Validation Accuracy = 0.946
EPOCH 10 ...
Validation Accuracy = 0.944
EPOCH 11 ...
Validation Accuracy = 0.950
EPOCH 12 ...
Validation Accuracy = 0.943
EPOCH 13 ...
Validation Accuracy = 0.947
EPOCH 14 ...
Validation Accuracy = 0.953
EPOCH 15 ...
Validation Accuracy = 0.953
EPOCH 16 ...
Validation Accuracy = 0.954
EPOCH 17 ...
Validation Accuracy = 0.962
EPOCH 18 ...
Validation Accuracy = 0.958
EPOCH 19 ...
Validation Accuracy = 0.963
EPOCH 20 ...
Validation Accuracy = 0.960
EPOCH 21 ...
Validation Accuracy = 0.959
EPOCH 22 ...
Validation Accuracy = 0.959
EPOCH 23 ...
Validation Accuracy = 0.959
EPOCH 24 ...
Validation Accuracy = 0.961
EPOCH 25 ...
Validation Accuracy = 0.956
EPOCH 26 ...
Validation Accuracy = 0.964
EPOCH 27 ...
Validation Accuracy = 0.965
EPOCH 28 ...
Validation Accuracy = 0.965
EPOCH 29 ...
Validation Accuracy = 0.961
EPOCH 30 ...
Validation Accuracy = 0.964
EPOCH 31 ...
Validation Accuracy = 0.966
EPOCH 32 ...
Validation Accuracy = 0.963
EPOCH 33 ...
Validation Accuracy = 0.959
EPOCH 34 ...
Validation Accuracy = 0.966
EPOCH 35 ...
Validation Accuracy = 0.966
EPOCH 36 ...
Validation Accuracy = 0.963
EPOCH 37 ...
Validation Accuracy = 0.959
EPOCH 38 ...
Validation Accuracy = 0.967
EPOCH 39 ...
Validation Accuracy = 0.965
EPOCH 40 ...
Validation Accuracy = 0.966
EPOCH 41 ...
Validation Accuracy = 0.964
EPOCH 42 ...
Validation Accuracy = 0.966
EPOCH 43 ...
Validation Accuracy = 0.967
EPOCH 44 ...
Validation Accuracy = 0.963
EPOCH 45 ...
Validation Accuracy = 0.968
EPOCH 46 ...
Validation Accuracy = 0.966
EPOCH 47 ...
Validation Accuracy = 0.968
EPOCH 48 ...
Validation Accuracy = 0.966
EPOCH 49 ...
Validation Accuracy = 0.960
EPOCH 50 ...
Validation Accuracy = 0.968
Model saved
###Markdown
--- Step 3: Test a Model on New ImagesTo give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name. Load and Output the Images
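Before running predictions, it helps to turn `signnames.csv` into an id-to-name lookup; a minimal sketch (the dictionary name is illustrative):

```python
import csv

# Build a {class_id: sign_name} lookup from the provided signnames.csv.
with open('signnames.csv', 'r') as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    sign_names = {int(row[0]): row[1] for row in reader}

# Example: sign_names[35] should read something like 'Ahead only'.
```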
###Code
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import glob
import numpy as np
new_images = []
image1 = mpimg.imread('GermanTrafficSigns/aheadonly35.jpg')
plt.figure()
image1 = np.asarray(image1)
images1_gry = np.sum(image1/3, axis=2, keepdims=True)  # collapse the colour channels (axis 2 for a single HxWx3 image)
plt.imshow(images1_gry.squeeze(), cmap="gray")
new_images.append(images1_gry)
image2 = mpimg.imread('GermanTrafficSigns/bicycle29.jpg')
plt.figure()
plt.imshow(image2)
new_images.append(image2/3)
image3 = mpimg.imread('GermanTrafficSigns/noentry17.jpg')
plt.figure()
plt.imshow(image3)
new_images.append(image3/3)
image4 = mpimg.imread('GermanTrafficSigns/pedestrian22.jpg')
plt.figure()
plt.imshow(image4)
new_images.append(image4/3)
image5 = mpimg.imread('GermanTrafficSigns/wildanimals31.jpg')
plt.figure()
plt.imshow(image5)
new_images.append(image5/3)
#new_images_gry = np.sum(new_images/3, axis=3, keepdims=True)
import glob
import cv2
import matplotlib.image as mpimg
fig, axs = plt.subplots(2,4, figsize=(4, 2))
fig.subplots_adjust(hspace = .2, wspace=.001)
axs = axs.ravel()
my_images = []
for i, img in enumerate(glob.glob('GermanTrafficSigns/*x.jpg')):
image = cv2.imread(img)
axs[i].axis('off')
axs[i].imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
my_images.append(image)
my_images = np.asarray(my_images)
my_images_gry = np.sum(my_images/3, axis=3, keepdims=True)
my_images_normalized = (my_images_gry - 128)/128
print(my_images_normalized.shape)
# Load our images first, and we'll check what we have
from glob import glob
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
image_paths = glob('images/*.jpg')
# Print out the image paths
print(image_paths)
# View an example of an image
example = mpimg.imread(image_paths[0])
plt.imshow(example)
plt.show()
###Output
_____no_output_____
###Markdown
Predict the Sign Type for Each Image
###Code
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
import tensorflow as tf
my_labels = [35,29,17, 27, 31]
# Check Test Accuracy
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
output_accuracy = evaluate(new_images, my_labels)
print("Test Accuracy = {:.3f}".format(output_accuracy[0]))
###Output
INFO:tensorflow:Restoring parameters from ./lenet
###Markdown
Analyze Performance
###Code
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
###Output
_____no_output_____
###Markdown
Output Top 5 Softmax Probabilities For Each Image Found on the Web For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.html#top_k) could prove helpful here. The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.`tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. `tf.nn.top_k` is used to choose the three classes with the highest probability:``` (5, 6) arraya = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202], [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401, 0.15899337], [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 , 0.23892179], [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 , 0.16505091], [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137, 0.09155967]])```Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:```TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202], [ 0.28086119, 0.27569815, 0.18063401], [ 0.26076848, 0.23892179, 0.23664738], [ 0.29198961, 0.26234032, 0.16505091], [ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5], [0, 1, 4], [0, 5, 1], [1, 3, 5], [1, 4, 3]], dtype=int32))```Looking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`, you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.
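A minimal sketch of how the top five probabilities could be pulled from this notebook's graph is shown below; it assumes `logits`, `saver` and the `w`/`keep_prob` placeholders defined earlier, and that the web images have been resized and stacked into an array of shape (N, 32, 32, 3):

```python
# Sketch only: top-5 softmax probabilities for the downloaded images.
softmax_op = tf.nn.softmax(logits)
top5_op = tf.nn.top_k(softmax_op, k=5)

with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    top5 = sess.run(top5_op, feed_dict={w: new_images, keep_prob: 1.0})

for i in range(len(top5.values)):
    print('Image', i, '-> class ids:', top5.indices[i], 'probabilities:', top5.values[i])
```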
###Code
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
Project WriteupOnce you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file. > **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. --- Step 4 (Optional): Visualize the Neural Network's State with Test Images This section is not required to complete but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device, they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol. Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimulus image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for its second convolutional layer you could enter conv2 as the tf_activation variable.For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image. Your output should look something like this (above)
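A hedged sketch of how the helper defined in the next cell might be called for this notebook's graph follows; the tensor name `'Relu:0'` (the first convolutional activation) and the alias for the input placeholder are assumptions made for illustration:

```python
# Illustrative only: visualize one convolution's feature maps for a single training image.
x = w  # alias, because outputFeatureMap feeds the placeholder through the name `x`
image_batch = X_train[0:1].astype(np.float32)  # shape (1, 32, 32, 3)

with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    conv_tensor = sess.graph.get_tensor_by_name('Relu:0')  # assumed name of the first ReLU activation
    outputFeatureMap(image_batch, conv_tensor)
```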
###Code
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
    # with size, normalization, etc. if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
if activation_min != -1 & activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
###Output
_____no_output_____
###Markdown
Self-Driving Car Engineer Nanodegree Deep Learning Project: Build a Traffic Sign Recognition ClassifierIn this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary. > **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/481/view) for this project.The [rubric](https://review.udacity.com/!/rubrics/481/view) contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. --- Step 0: Load The Data
###Code
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
training_file = '../data/train.p'
validation_file='../data/valid.p'
testing_file = '../data/test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
###Output
_____no_output_____
###Markdown
--- Step 1: Dataset Summary & ExplorationThe pickled data is a dictionary with 4 key/value pairs:- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.- `'sizes'` is a list containing tuples, (width, height) representing the original width and height the image.- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results. Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
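A quick way to confirm the structure described above is to print the keys and array shapes of the loaded dictionaries; a minimal sketch assuming `train` from the loading cell:

```python
# Inspect the pickled training dictionary (assumes `train` is already loaded above).
print(train.keys())
print('features:', train['features'].shape)  # (num examples, width, height, channels)
print('labels:  ', train['labels'].shape)
```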
###Code
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
# TODO: Number of training examples
n_train = X_train.shape[0]
# TODO: Number of validation examples
n_validation = X_valid.shape[0]
# TODO: Number of testing examples.
n_test = X_test.shape[0]
# TODO: What's the shape of a traffic sign image?
image_shape = (X_train.shape[1], X_train.shape[2])
# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(set(y_train))  # 43 classes in the German Traffic Sign dataset
print('yomna', X_train.shape)
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
###Output
yomna (34799, 32, 32, 3)
Number of training examples = 34799
Number of testing examples = 12630
Image data shape = (32, 32)
Number of classes = 43
###Markdown
Include an exploratory visualization of the dataset Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc. The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.**NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
###Code
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# Visualizations will be shown in the notebook.
%matplotlib inline
###Output
_____no_output_____
###Markdown
---- Step 2: Design and Test a Model ArchitectureDesign and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play! With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. There are various aspects to consider when thinking about this problem:- Neural network architecture (is the network over or underfitting?)- Play around preprocessing techniques (normalization, rgb to grayscale, etc)- Number of examples per label (some have more than others).- Generate fake data.Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these. Pre-process the Data Set (normalization, grayscale, etc.) Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project. Other pre-processing steps are optional. You can try different techniques to see if it improves performance. Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
###Code
import cv2
# for i in range(len(X_train)):
# X_train_[i] = cv2.cvtColor(X_train[i],cv2.COLOR_BGR2GRAY)
#X_train_ [1]= cv2.cvtColor(X_train[1],cv2.COLOR_BGR2GRAY)
print (X_train[1].shape)
plt.imshow(X_train[10])
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
X_train_=X_train
# X_train_ = np.ndarray(shape = (n_train, 32, 32, 3)).astype(np.float32)
# X_valid_ = np.ndarray(shape = (n_validation, 32, 32, 3)).astype(np.float32)
# X_test_= np.ndarray(shape = (n_test, 32, 32, 3)).astype(np.float32)
# for i in range(X_train):
# X_train_[i] = (X_train[i] - 128.0) / 128.0
# X_test_[i]=(X_test[i]-128)/128
# X_valid_[i]=(X_valid[i]-128)/128
import numpy as np
X_train_ = np.ndarray(shape = (n_train, 32, 32,3)).astype(np.float32)
for i in range (n_train) :
X_train_[i]=(X_train[i]-128)/128
test= cv2.cvtColor(X_train_[2],cv2.COLOR_RGB2GRAY)
import array as arr
X_train_preprocessed = np.ndarray(shape = (n_train, 32, 32)).astype(np.float32)
for i in range(n_train):
X_train_preprocessed[i]=test
for i in range(n_train):
X_train_preprocessed[i]= cv2.cvtColor(X_train_[i],cv2.COLOR_RGB2GRAY)
    #X_train_preprocessed[i]=X_train_preprocessed[i].reshape((32,32,1))  # not needed: the array already stores (32,32) grayscale images
#X_train_preprocessed[10]= cv2.cvtColor(X_train_[10],cv2.COLOR_RGB2GRAY)
plt.imshow(X_train_preprocessed[10])
print(X_train_preprocessed.shape)
print(len(X_train_preprocessed))
# for i in range (len(X_train_)):
# X_train_ [i]/= np.std(X_train_[i], axis = 0)
### converting to grayscale, etc.
from sklearn.utils import shuffle
X_train_preprocessed, y_train = shuffle(X_train_preprocessed, y_train)
### Feel free to use as many code cells as needed.
import tensorflow as tf
EPOCHS = 10
BATCH_SIZE = 128
###Output
_____no_output_____
###Markdown
Model Architecture
###Code
### Define your architecture here.
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# Layer 1: Convolutional. Input = 32x32. Output = 28x28x6.
layer1_w= tf.Variable(tf.truncated_normal((5,5,3,6), mean = mu, stddev = sigma))
layer1_b=tf.Variable(tf.zeros(6))
layer1_conv= tf.nn.conv2d(x,layer1_w,strides=[1, 1,1, 1], padding='VALID')
# TODO: Activation. and adding the bias
layer1_conv= tf.nn.relu(tf.nn.bias_add(layer1_conv,layer1_b))
# TODO: Pooling. Input = 28x28x6. Output = 14x14x6.
layer1_output= tf.nn.max_pool(layer1_conv,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='VALID')
# TODO: Layer 2: Convolutional. Output = 10x10x16.
layer2_w = tf.Variable(tf.truncated_normal((5,5,6,16), mean=mu, stddev= sigma))
layer2_b= tf.Variable(tf.zeros(16))
layer2_conv= tf.nn.conv2d(layer1_output,layer2_w,strides=[1,1,1,1],padding='VALID')+layer2_b
# TODO: Activation.
layer2_conv=tf.nn.relu(layer2_conv)
# TODO: Pooling. Input = 10x10x16. Output = 5x5x16.
layer2_output= tf.nn.max_pool( layer2_conv, ksize=[1,2,2,1], strides=[1,2,2,1], padding='VALID')
# TODO: Flatten. Input = 5x5x16. Output = 400.
layer2_output_flattened= tf.contrib.layers.flatten(layer2_output)
# TODO: Layer 3: Fully Connected. Input = 400. Output = 120.
layer3_w= tf.Variable(tf.truncated_normal((400,120),mean=mu, stddev=sigma))
layer3_b= tf.Variable(tf.zeros(120))
layer3_fc= tf.add(tf.matmul(layer2_output_flattened, layer3_w),layer3_b)
# TODO: Activation.
layer3_output= tf.nn.relu(layer3_fc)
# TODO: Layer 4: Fully Connected. Input = 120. Output = 84.
layer4_w= tf.Variable(tf.truncated_normal((120,84), mean=mu, stddev=sigma))
layer4_b = tf.Variable(tf.zeros(84))
layer4_fc= tf.add(tf.matmul(layer3_output,layer4_w),layer4_b)
# TODO: Activation.
layer4_output= tf.nn.relu(layer4_fc)
# TODO: Layer 5: Fully Connected. Input = 84. Output = 43.
layer5_w= tf.Variable(tf.truncated_normal((84,43), mean=mu, stddev=sigma))
layer5_b = tf.Variable(tf.zeros(43))
layer5_fc= tf.add(tf.matmul(layer4_output,layer5_w),layer5_b)
logits=layer5_fc
return logits
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
Train, Validate and Test the Model A validation set can be used to assess how well the model is performing. A low accuracy on the training and validationsets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
###Code
### Train your model here.
x = tf.placeholder(tf.float32, (None, 32, 32))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 43)
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
### Calculate and report the accuracy on the training and validation set.
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train_preprocessed, y_train= shuffle(X_train_preprocessed, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train_preprocessed[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_valid, y_valid)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet')
print("Model saved")
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
--- Step 3: Test a Model on New ImagesTo give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name. Load and Output the Images
###Code
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
Predict the Sign Type for Each Image
###Code
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
Analyze Performance
###Code
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
###Output
_____no_output_____
###Markdown
Output Top 5 Softmax Probabilities For Each Image Found on the Web For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.html#top_k) could prove helpful here. The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.`tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. `tf.nn.top_k` is used to choose the three classes with the highest probability:``` (5, 6) arraya = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202], [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401, 0.15899337], [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 , 0.23892179], [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 , 0.16505091], [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137, 0.09155967]])```Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:```TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202], [ 0.28086119, 0.27569815, 0.18063401], [ 0.26076848, 0.23892179, 0.23664738], [ 0.29198961, 0.26234032, 0.16505091], [ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5], [0, 1, 4], [0, 5, 1], [1, 3, 5], [1, 4, 3]], dtype=int32))```Looking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`, you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.
###Code
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
Project WriteupOnce you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file. > **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. --- Step 4 (Optional): Visualize the Neural Network's State with Test Images This section is not required to complete but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device, they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol. Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimulus image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for its second convolutional layer you could enter conv2 as the tf_activation variable.For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image. Your output should look something like this (above)
###Code
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
    # with size, normalization, etc. if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
if activation_min != -1 & activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
###Output
_____no_output_____
###Markdown
Self-Driving Car Engineer Nanodegree Deep Learning Project: Build a Traffic Sign Recognition ClassifierIn this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary. > **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/481/view) for this project.The [rubric](https://review.udacity.com/!/rubrics/481/view) contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. --- Step 0: Load The Data
###Code
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
training_file = "../traffic-sign-dataset/train.p"
validation_file= "../traffic-sign-dataset/valid.p"
testing_file = "../traffic-sign-dataset/test.p"
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
###Output
_____no_output_____
###Markdown
--- Step 1: Dataset Summary & ExplorationThe pickled data is a dictionary with 4 key/value pairs:- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.- `'sizes'` is a list containing tuples, (width, height) representing the original width and height the image.- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results.
###Code
import numpy as np
print(X_train.shape)
print(y_train.shape)
print(X_valid.shape)
print(y_valid.shape)
print(X_test.shape)
print(y_test.shape)
###Output
(34799, 32, 32, 3)
(34799,)
(4410, 32, 32, 3)
(4410,)
(12630, 32, 32, 3)
(12630,)
###Markdown
Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
###Code
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
# TODO: Number of training examples
n_train = X_train.shape[0]
# TODO: Number of validation examples
n_validation = X_valid.shape[0]
# TODO: Number of testing examples.
n_test = X_test.shape[0]
# TODO: What's the shape of a traffic sign image?
image_shape = X_train.shape[1:4]
# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(np.unique(y_train))
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
def get_label_counts(labels):
label_count = {}
for label in labels:
if label not in label_count:
label_count[label] = 0
label_count[label] += 1
return sorted(label_count.items())
print(get_label_counts(y_train))
###Output
Number of training examples = 34799
Number of testing examples = 12630
Image data shape = (32, 32, 3)
Number of classes = 43
[(0, 180), (1, 1980), (2, 2010), (3, 1260), (4, 1770), (5, 1650), (6, 360), (7, 1290), (8, 1260), (9, 1320), (10, 1800), (11, 1170), (12, 1890), (13, 1920), (14, 690), (15, 540), (16, 360), (17, 990), (18, 1080), (19, 180), (20, 300), (21, 270), (22, 330), (23, 450), (24, 240), (25, 1350), (26, 540), (27, 210), (28, 480), (29, 240), (30, 390), (31, 690), (32, 210), (33, 599), (34, 360), (35, 1080), (36, 330), (37, 180), (38, 1860), (39, 270), (40, 300), (41, 210), (42, 210)]
###Markdown
Include an exploratory visualization of the dataset Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc. The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.**NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
###Code
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
import random
import cv2
# Visualizations will be shown in the notebook.
%matplotlib inline
index = random.randint(0, len(X_train) - 1)  # randint is inclusive, so avoid indexing one past the end
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image)
plt.show()
print(y_train[index])
gray_img = cv2.cvtColor(image,cv2.COLOR_RGB2GRAY)
gray_img = (gray_img-np.mean(gray_img))/np.std(gray_img)
plt.imshow(gray_img,cmap="gray")
plt.show()
print("gray_img_shape:{}".format(gray_img.shape))
#print("gray_img_shape:{}".format(gray_img.reshape(32,32,1)))
#Translation
randX = random.randint(-2,2)
randY = random.randint(-2,2)
print("randx={}, randy={}".format(randX, randY))
M = np.float32([[1,0,randX],[0,1,randY]])
dst = cv2.warpAffine(gray_img,M,(32,32))
print(dst.shape)
plt.imshow(dst, cmap="gray")
plt.show()
#rotation
angle = random.uniform(-15,15)
print("angle={}".format(angle))
M = cv2.getRotationMatrix2D((32/2,32/2),angle,1)
dst = cv2.warpAffine(gray_img,M,(32,32))
print(dst.shape)
plt.imshow(dst)
plt.show()
#noise
noise = np.abs(np.random.randn(32, 32)*0.3)
dst = gray_img + noise
print(dst.shape)
plt.imshow(dst, cmap="gray")
plt.show()
###Output
_____no_output_____
###Markdown
---- Step 2: Design and Test a Model ArchitectureDesign and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play! With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. There are various aspects to consider when thinking about this problem:- Neural network architecture (is the network over or underfitting?)- Play around preprocessing techniques (normalization, rgb to grayscale, etc)- Number of examples per label (some have more than others).- Generate fake data.Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these. Pre-process the Data Set (normalization, grayscale, etc.) Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project. Other pre-processing steps are optional. You can try different techniques to see if it improves performance. Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
###Code
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
def preprocess(image):
#gray_img = cv2.cvtColor(image,cv2.COLOR_RGB2GRAY)
#norm_img = (gray_img - np.mean(gray_img))/np.std(gray_img)
#norm_img = (gray_img - 128)/128
#return (norm_img.reshape(32,32,1))
x = cv2.cvtColor(image,cv2.COLOR_RGB2GRAY)
#x = image
x = (x-np.mean(x))/np.std(x)
return x
def augmentation(composed_features, labels):
feat_ret = []
label_ret = []
for i in range(len(labels)):
label = labels[i]
feats = composed_features[i]
feat_ret.append(feats.reshape(32,32,1))
label_ret.append(label)
#Translation
#randX = random.randint(-5, 5)
#randY = random.randint(-5,5)
#M = np.float32([[1,0,randX],[0,1,randY]])
#translation_x = cv2.warpAffine(feats,M,(32,32))
#feat_ret.append(translation_x.reshape(32,32,1))
#label_ret.append(label)
#noise
noise = np.abs(np.random.randn(32, 32)*0.3)
noise_x = feats + noise
feat_ret.append(noise_x.reshape(32,32,1))
label_ret.append(label)
#rotation
angle = random.uniform(-15,15)
M = cv2.getRotationMatrix2D((32/2,32/2),angle,1)
rot_x = cv2.warpAffine(feats,M,(32,32))
feat_ret.append(rot_x.reshape(32,32,1))
label_ret.append(label)
return feat_ret, label_ret
print("train data size={}".format(len(X_train)))
preprocess_train = np.array([preprocess(img) for img in X_train])
X_train, y_train = augmentation(preprocess_train, y_train)
n_train = len(X_train)
X_valid = np.array([preprocess(img).reshape(32,32,1) for img in X_valid])
X_test = np.array([preprocess(img).reshape(32,32,1) for img in X_test])
print((X_train[0]).shape)
print("train data size={}".format(len(X_train)))
categoried_train_dat = {}
for i in range(len(X_train)):
label = y_train[i]
data = X_train[i]
#print(data.shape)
if label not in categoried_train_dat:
categoried_train_dat[label] = []
categoried_train_dat[label].append(data)
###Output
train data size=34799
(32, 32, 1)
train data size=104397
###Markdown
Model Architecture
###Code
### Define your architecture here.
### Feel free to use as many code cells as needed.
import tensorflow as tf
SEED = 2018
def weight_variable(shape,mean,stddev,name,seed=SEED):
    init = tf.truncated_normal(shape, mean=mean, stddev=stddev, seed=seed)  # honour the seed argument instead of the global SEED
return tf.Variable(init,name=name)
def bias_variable(shape,init_value,name):
init = tf.constant(init_value,shape=shape)
return tf.Variable(init,name=name)
def conv2d(x,W,strides,padding,name):
return tf.nn.conv2d(x,W,strides=strides,padding=padding,name=name)
def max_2x2_pool(x,padding,name):
return tf.nn.max_pool(x,ksize=[1,2,2,1],strides=[1,2,2,1],padding=padding,name=name)
IMG_DEPTH = 1
mu =0
sigma = 0.05
bias_init = 0.05
learning_rate=0.001
epochs = 20
num_per_label = 3
keep_rate = 0.5
kp_conv = 0.6
batch_size = 128
weights ={
'W_conv1': weight_variable([3, 3, IMG_DEPTH, 80], mean=mu, stddev=sigma, name='W_conv1'),
'W_conv2': weight_variable([3, 3, 80, 120], mean=mu, stddev=sigma, name='W_conv2'),
'W_conv3': weight_variable([4, 4, 120, 180], mean=mu, stddev=sigma, name='W_conv3'),
'W_conv4': weight_variable([3, 3, 180, 200], mean=mu, stddev=sigma, name='W_conv4'),
'W_conv5': weight_variable([3, 3, 200, 200], mean=mu, stddev=sigma, name='W_conv5'),
'W_fc1': weight_variable([800, 80], mean=mu, stddev=sigma, name='W_fc1'),
'W_fc2': weight_variable([80, 80], mean=mu, stddev=sigma, name='W_fc2'),
'W_fc3': weight_variable([80, 43], mean=mu, stddev=sigma, name='W_fc3'),
}
biases = {
'b_conv1': bias_variable(shape=[80], init_value=bias_init, name='b_conv1'),
'b_conv2': bias_variable(shape=[120], init_value=bias_init, name='b_conv2'),
'b_conv3': bias_variable(shape=[180], init_value=bias_init, name='b_conv3'),
'b_conv4': bias_variable(shape=[200], init_value=bias_init, name='b_conv4'),
'b_conv5': bias_variable(shape=[200], init_value=bias_init, name='b_conv5'),
'b_fc1': bias_variable([80], init_value=bias_init, name='b_fc1'),
'b_fc2': bias_variable([80], init_value=bias_init, name='b_fc2'),
'b_fc3': bias_variable([43], init_value=bias_init, name='b_fc3'),
}
def traffic_model(x,keep_prob,keep_p_conv,weights,biases):
'''
ConvNet model for Traffic sign classifier
x - input image is tensor of shape(n_imgs,img_height,img_width,img_depth)
    keep_prob - dropout keep probability for the fully connected layers
    keep_p_conv - dropout keep probability for the convolutional blocks
    weights - dictionary of the weights for convolution layers and fully connected layers
    biases - dictionary of the biases for convolutional layers and fully connected layers
'''
# Convolutional block 1
conv1 = conv2d(x, weights['W_conv1'], strides=[1,1,1,1], padding='VALID', name='conv1_op')
conv1_act = tf.nn.relu(conv1 + biases['b_conv1'], name='conv1_act')
    conv1_drop = tf.nn.dropout(conv1_act, keep_prob=keep_p_conv, name='conv1_drop')  # use the argument, not the global placeholder
conv2 = conv2d(conv1_drop, weights['W_conv2'], strides=[1,1,1,1], padding='SAME', name='conv2_op')
conv2_act = tf.nn.relu(conv2 + biases['b_conv2'], name='conv2_act')
conv2_pool = max_2x2_pool(conv2_act, padding='VALID', name='conv2_pool')
    pool2_drop = tf.nn.dropout(conv2_pool, keep_prob=keep_p_conv, name='conv2_drop')
#Convolution block 2
conv3 = conv2d(pool2_drop, weights['W_conv3'], strides=[1,1,1,1], padding='VALID', name='conv3_op')
conv3_act = tf.nn.relu(conv3 + biases['b_conv3'], name='conv3_act')
    conv3_drop = tf.nn.dropout(conv3_act, keep_prob=keep_p_conv, name='conv3_drop')
conv4 = conv2d(conv3_drop, weights['W_conv4'], strides=[1,1,1,1], padding='SAME', name='conv4_op')
conv4_act = tf.nn.relu(conv4 + biases['b_conv4'], name='conv4_act')
conv4_pool = max_2x2_pool(conv4_act, padding='VALID', name='conv4_pool')
conv4_drop = tf.nn.dropout(conv4_pool, keep_prob, name='conv4_drop')
conv5 = conv2d(conv4_drop, weights['W_conv5'], strides=[1,1,1,1], padding='VALID', name='conv5_op')
conv5_act = tf.nn.relu(conv5 + biases['b_conv5'], name='conv5_act')
conv5_pool = max_2x2_pool(conv5_act, padding='VALID', name='conv5_pool')
conv5_drop = tf.nn.dropout(conv5_pool, keep_prob, name='conv5_drop')
fc0 = flatten(conv5_drop)
fc1 = tf.nn.relu( tf.matmul( fc0, weights['W_fc1'] ) + biases['b_fc1'], name='fc1' )
fc1_drop = tf.nn.dropout(fc1, keep_prob, name='fc1_drop')
fc2 = tf.nn.relu( tf.matmul( fc1_drop, weights['W_fc2'] ) + biases['b_fc2'], name='fc2' )
fc2_drop = tf.nn.dropout(fc2, keep_prob, name='fc2_drop')
logits = tf.add(tf.matmul(fc2_drop, weights['W_fc3']),biases['b_fc3'], name='logits')
return logits
###Output
_____no_output_____
###Markdown
Train, Validate and Test the Model A validation set can be used to assess how well the model is performing. A low accuracy on the training and validationsets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
###Code
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
# tf Graph input
from tensorflow.contrib.layers import flatten
from sklearn.utils import shuffle
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, batch_size):
batch_x, batch_y = X_data[offset:offset+batch_size], y_data[offset:offset+batch_size]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, k_p_conv:1,keep_prob:1})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
k_p_conv = tf.placeholder(tf.float32,name='k_p_conv')
x = tf.placeholder(tf.float32, [None, 32, 32, 1], name="x")
y = tf.placeholder(tf.int32, [None], name="y")
one_hot_y = tf.one_hot(y, n_classes, name="one_hot_y")
keep_prob = tf.placeholder(tf.float32)
# Model
logits = traffic_model(x, keep_prob, k_p_conv, weights, biases)
# Define loss and optimizer
regularizer = 0
for name, w in weights.items():
regularizer += tf.nn.l2_loss(w)
for name, b in biases.items():
regularizer += tf.nn.l2_loss(b)
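# Note: the L2 regularizer accumulated above is not added to the loss below; to enable
# weight decay, make sure regularization_param is defined and uncomment the
# corresponding term in `cost`.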
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=one_hot_y)
#+ regularization_param * regularizer
)
#optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
#optimizer = tf.train.AdagradOptimizer(learning_rate=learning_rate).minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initializing the variables
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
sess.run(init)
for epoch in range(epochs):
"""
X_train, y_train = shuffle(X_train, y_train)
loss = 0
train_acc = 0
for batch in range(n_train//batch_size):
batch_x = X_train[batch*batch_size:(batch+1)*batch_size]
batch_y = y_train[batch*batch_size:(batch+1)*batch_size]
#print(batch_x.shape)
#xxx = sess.run(logits, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.})
#print(xxx.shape)
sess.run(optimizer, feed_dict={x: batch_x, y: batch_y, keep_prob: dropout})
loss += sess.run(cost, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.})
"""
loss = 0
for batch in range(n_train//(num_per_label*n_classes)):
train_x = []
train_y = []
for label, imagelst in categoried_train_dat.items():
dataset = shuffle(imagelst)[:num_per_label]
train_y.extend([label for i in range(len(dataset))])
train_x.extend(dataset)
train_x, train_y = shuffle(train_x, train_y)
sess.run(optimizer, feed_dict={x: train_x, y: train_y, keep_prob:keep_rate, k_p_conv:kp_conv})
loss += sess.run(cost, feed_dict={x: train_x, y: train_y, k_p_conv:1,keep_prob:1})
#train_acc = evaluate(preprocessed_train, y_train)
valid_acc = evaluate(X_valid, y_valid)
print('Epoch {:>2}, Batch {:>3} - Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(
epoch + 1,
batch + 1,
loss,
valid_acc,
))
# Calculate Test Accuracy
test_acc = evaluate(X_test, y_test)
print('Testing Accuracy: {}'.format(test_acc))
print("done")
###Output
WARNING:tensorflow:From <ipython-input-7-9dd85b0c1135>:39: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Future major versions of TensorFlow will allow gradients to flow
into the labels input on backprop by default.
See @{tf.nn.softmax_cross_entropy_with_logits_v2}.
Epoch 1, Batch 809 - Loss: 1801.4241 Validation Accuracy: 0.665760
Epoch 2, Batch 809 - Loss: 335.1975 Validation Accuracy: 0.898639
Epoch 3, Batch 809 - Loss: 121.1222 Validation Accuracy: 0.970295
Epoch 4, Batch 809 - Loss: 49.7612 Validation Accuracy: 0.988662
Epoch 5, Batch 809 - Loss: 26.5247 Validation Accuracy: 0.987075
Epoch 6, Batch 809 - Loss: 19.8635 Validation Accuracy: 0.986621
Epoch 7, Batch 809 - Loss: 15.7582 Validation Accuracy: 0.993424
Epoch 8, Batch 809 - Loss: 14.0208 Validation Accuracy: 0.995692
Epoch 9, Batch 809 - Loss: 13.7380 Validation Accuracy: 0.994785
Epoch 10, Batch 809 - Loss: 11.3497 Validation Accuracy: 0.991837
Epoch 11, Batch 809 - Loss: 10.9467 Validation Accuracy: 0.995011
Epoch 12, Batch 809 - Loss: 10.5471 Validation Accuracy: 0.989116
Epoch 13, Batch 809 - Loss: 10.4905 Validation Accuracy: 0.990249
Epoch 14, Batch 809 - Loss: 9.3275 Validation Accuracy: 0.994331
Epoch 15, Batch 809 - Loss: 9.1258 Validation Accuracy: 0.995465
Epoch 16, Batch 809 - Loss: 7.7087 Validation Accuracy: 0.995692
Epoch 17, Batch 809 - Loss: 7.7437 Validation Accuracy: 0.996145
Epoch 18, Batch 809 - Loss: 7.1070 Validation Accuracy: 0.995918
Epoch 19, Batch 809 - Loss: 8.8863 Validation Accuracy: 0.990023
Epoch 20, Batch 809 - Loss: 8.7731 Validation Accuracy: 0.987755
Testing Accuracy: 0.9662707837724723
done
###Markdown
--- Step 3: Test a Model on New ImagesTo give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name. Load and Output the Images
###Code
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
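# A minimal sketch (not part of the original solution) for loading the downloaded signs.
# The folder name 'web_images' is hypothetical, and the grayscale + /255 scaling is an
# assumption -- it has to match whatever preprocessing pipeline was applied to the
# training data so that the images fit the (None, 32, 32, 1) input placeholder.
import os
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

web_images = []
for fname in sorted(os.listdir("web_images")):
    img = Image.open(os.path.join("web_images", fname)).resize((32, 32)).convert("L")
    plt.imshow(img, cmap="gray")
    plt.title(fname)
    plt.show()
    web_images.append(np.array(img)[..., np.newaxis] / 255.0)
web_images = np.stack(web_images)  # shape: (n_images, 32, 32, 1)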
###Output
_____no_output_____
###Markdown
Predict the Sign Type for Each Image
###Code
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
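# A minimal sketch, assuming `sess` is an active session holding the trained variables
# (e.g. the training cell re-run with tf.InteractiveSession(), or the weights restored
# from a checkpoint via tf.train.Saver), and `web_images` comes from the loading sketch above.
web_logits = sess.run(logits, feed_dict={x: web_images, keep_prob: 1.0, k_p_conv: 1.0})
web_probs = np.exp(web_logits) / np.sum(np.exp(web_logits), axis=1, keepdims=True)
web_predictions = np.argmax(web_probs, axis=1)
print("Predicted class ids:", web_predictions)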
###Output
_____no_output_____
###Markdown
Analyze Performance
###Code
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
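# Sketch only: the ground-truth ids below are placeholders and have to be looked up in
# signnames.csv for the actual downloaded images; `web_predictions` comes from the
# prediction sketch above.
web_true_labels = np.array([14, 17, 1, 13, 25])  # hypothetical true class ids
web_accuracy = np.mean(web_predictions == web_true_labels)
print("Accuracy on the new images: {:.0%}".format(web_accuracy))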
###Output
_____no_output_____
###Markdown
Output Top 5 Softmax Probabilities For Each Image Found on the Web For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.htmltop_k) could prove helpful here. The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.`tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. `tf.nn.top_k` is used to choose the three classes with the highest probability:``` (5, 6) arraya = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202], [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401, 0.15899337], [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 , 0.23892179], [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 , 0.16505091], [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137, 0.09155967]])```Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:```TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202], [ 0.28086119, 0.27569815, 0.18063401], [ 0.26076848, 0.23892179, 0.23664738], [ 0.29198961, 0.26234032, 0.16505091], [ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5], [0, 1, 4], [0, 5, 1], [1, 3, 5], [1, 4, 3]], dtype=int32))```Looking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`, you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.
###Code
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
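# A minimal sketch of the suggested tf.nn.top_k usage, reusing `web_probs` and `sess`
# from the prediction sketch above.
top_5 = sess.run(tf.nn.top_k(tf.constant(web_probs), k=5))
for i, (probs, ids) in enumerate(zip(top_5.values, top_5.indices)):
    print("Image", i)
    for p, class_id in zip(probs, ids):
        print("  class {:2d}: {:.4f}".format(class_id, p))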
###Output
_____no_output_____
###Markdown
Project WriteupOnce you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file. > **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. --- Step 4 (Optional): Visualize the Neural Network's State with Test Images This Section is not required to complete but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol. Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for its second convolutional layer you could enter conv2 as the tf_activation variable. For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image. Your output should look something like this (above)
###Code
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, ect if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
if activation_min != -1 & activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
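# Hypothetical usage sketch: the convolutional activations were given explicit names in
# traffic_model (e.g. 'conv1_act'), so the tensor can be fetched from the default graph
# and visualised for a single preprocessed image, assuming an active session `sess`:
# conv1_act = tf.get_default_graph().get_tensor_by_name('conv1_act:0')
# outputFeatureMap(web_images[0:1], conv1_act, plt_num=1)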
###Output
_____no_output_____
###Markdown
--- Step 0: Load The Data
###Code
import tensorflow as tf
import keras
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
training_file = "dataset/train.p"
validation_file= "dataset/valid.p"
testing_file = "dataset/test.p"
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
# Print the shape of variables
print(X_train.shape)
print(y_train.shape)
###Output
(34799, 32, 32, 3)
(34799,)
###Markdown
--- Step 1: Dataset Summary & ExplorationThe pickled data is a dictionary with 4 key/value pairs:- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.- `'sizes'` is a list containing tuples, (width, height) representing the original width and height the image.- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results. Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
###Code
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
# TODO: Number of training example
n_train = X_train.shape[0]
# TODO: Number of validation example
n_validation = X_valid.shape[0]
# TODO: Number of testing example.
n_test = X_test.shape[0]
# TODO: What's the shape of an traffic sign image?
image_shape = X_train.shape[1:]
# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(set(y_train))
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
###Output
Number of training examples = 34799
Number of testing examples = 12630
Image data shape = (32, 32, 3)
Number of classes = 43
###Markdown
Include an exploratory visualization of the dataset Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc. The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.**NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
###Code
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
import random
from PIL import Image
import numpy as np
import random
from PIL import Image, ImageEnhance
# Visualizations will be shown in the notebook.
%matplotlib inline
# Load name of id
with open("signnames.csv", "r") as f:
signnames = f.read()
id_to_name = { int(line.split(",")[0]):line.split(",")[1] for line in signnames.split("\n")[1:] if len(line) > 0}
graph_size = 3
random_index_list = [random.randint(0, X_train.shape[0] - 1) for _ in range(graph_size * graph_size)]
fig = plt.figure(figsize=(15, 15))
for i, index in enumerate(random_index_list):
a=fig.add_subplot(graph_size, graph_size, i+1)
#im = Image.fromarray(np.rollaxis(X_train[index] * 255, 0,3))
imgplot = plt.imshow(X_train[index])
# Plot some images
a.set_title('%s' % id_to_name[y_train[index]])
plt.show()
fig, ax = plt.subplots()
# the histogram of the data
values, bins, patches = ax.hist(y_train, n_classes)
ax.set_xlabel('Class id')
ax.set_ylabel('Count')
ax.set_title('Histogram of classes')
# Tweak spacing to prevent clipping of ylabel
fig.tight_layout()
print ("Most common index")
most_common_index = sorted(range(len(values)), key=lambda k: values[k], reverse=True)
for index in most_common_index[:10]:
print("index: %s => %s = %s" % (index, id_to_name[index], values[index]))
###Output
_____no_output_____
###Markdown
---- Step 2: Design and Test a Model ArchitectureDesign and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play! With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. There are various aspects to consider when thinking about this problem:- Neural network architecture (is the network over or underfitting?)- Play around preprocessing techniques (normalization, rgb to grayscale, etc)- Number of examples per label (some have more than others).- Generate fake data.Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these. Pre-process the Data Set (normalization, grayscale, etc.) Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project. Other pre-processing steps are optional. You can try different techniques to see if it improves performance. Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
###Code
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
# I used keras only for the ImageDataGenerator
from keras.preprocessing.image import ImageDataGenerator
X_train = X_train / 255
X_valid = X_valid / 255
X_test = X_test / 255
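# Note: instead of the (pixel - 128)/128 normalisation suggested above, the images are
# simply rescaled to [0, 1] here; the augmentation pipeline below operates on the same range.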
def preprocessing_function(img):
"""
Custom preprocessing_function
"""
img = img * 255
img = Image.fromarray(img.astype('uint8'), 'RGB')
img = ImageEnhance.Brightness(img).enhance(random.uniform(0.6, 1.5))
img = ImageEnhance.Contrast(img).enhance(random.uniform(0.6, 1.5))
return np.array(img) / 255
train_datagen = ImageDataGenerator()
train_datagen_augmented = ImageDataGenerator(
rotation_range=20,
shear_range=0.2,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True,
preprocessing_function=preprocessing_function)
inference_datagen = ImageDataGenerator()
train_datagen.fit(X_train)
train_datagen_augmented.fit(X_train)
inference_datagen.fit(X_valid)
inference_datagen.fit(X_test)
###Output
_____no_output_____
###Markdown
Example of augmented images
###Code
fig = plt.figure()
n = 0
graph_size = 3
for x_batch, y_batch in train_datagen_augmented.flow(X_train, y_train, batch_size=1):
a=fig.add_subplot(graph_size, graph_size, n+1)
imgplot = plt.imshow(x_batch[0])
n = n + 1
if n > 8:
break
plt.show()
###Output
_____no_output_____
###Markdown
Model Architecture CapsNet
###Code
import numpy as np
import tensorflow as tf
import numpy as np
def conv_caps_layer(input_layer, capsules_size, nb_filters, kernel, stride=2):
"""
Capsule layer for the convolutional inputs
**input:
*input_layer: (Tensor)
*capsules_size: (Integer) dimension of each capsule's output vector
*nb_filters: (Integer) number of capsule filters
*kernel: (Integer) size of the convolution kernel for each filter
*stride: (Integer) 2 by default
"""
# "In convolutional capsule layers each unit in a capsule is a convolutional unit.
# Therefore, each capsule will output a grid of vectors rather than a single vector output."
capsules = tf.contrib.layers.conv2d(
input_layer, nb_filters * capsules_size, kernel, stride, padding="VALID")
# conv shape: [?, kernel, kernel, nb_filters]
shape = capsules.get_shape().as_list()
capsules = tf.reshape(capsules, shape=(-1, np.prod(shape[1:3]) * nb_filters, capsules_size, 1))
# capsules shape: [?, nb_capsules, capsule_size, 1]
return squash(capsules)
def routing(u_hat, b_ij, nb_capsules, nb_capsules_p, iterations=4):
"""
Routing algorithm
**input:
*u_hat: Dot product (weights between previous capsule and current capsule)
*b_ij: the log prior probabilities that capsule i should be coupled to capsule j
*nb_capsules_p: Number of capsule in the previous layer
*nb_capsules: Number of capsule in this layer
"""
# Start the routing algorithm
for it in range(iterations):
with tf.variable_scope('routing_' + str(it)):
# Line 4 of algo
# probabilities that capsule i should be coupled to capsule j.
# c_ij: [nb_capsules_p, nb_capsules, 1, 1]
c_ij = tf.nn.softmax(b_ij, dim=2)
# Line 5 of algo
# c_ij: [ nb_capsules_p, nb_capsules, 1, 1]
# u_hat: [?, nb_capsules_p, nb_capsules, len_v_j, 1]
s_j = tf.multiply(c_ij, u_hat)
# s_j: [?, nb_capsules_p, nb_capsules, len_v_j, 1]
s_j = tf.reduce_sum(s_j, axis=1, keep_dims=True)
# s_j: [?, 1, nb_capsules, len_v_j, 1)
# line 6:
# squash using Eq.1,
v_j = squash(s_j)
# v_j: [1, 1, nb_capsules, len_v_j, 1)
# line 7:
# First reshape & tile v_j
# [? , 1, nb_capsules, len_v_j, 1] ->
# [?, nb_capsules_p, nb_capsules, len_v_j, 1]
v_j_tiled = tf.tile(v_j, [1, nb_capsules_p, 1, 1, 1])
# u_hat: [?, nb_capsules_p, nb_capsules, len_v_j, 1]
# v_j_tiled [1, nb_capsules_p, nb_capsules, len_v_j, 1]
u_dot_v = tf.matmul(u_hat, v_j_tiled, transpose_a=True)
# u_produce_v: [?, nb_capsules_p, nb_capsules, 1, 1]
b_ij += tf.reduce_sum(u_dot_v, axis=0, keep_dims=True)
#b_ih: [1, nb_capsules_p, nb_capsules, 1, 1]
return tf.squeeze(v_j, axis=1)
def fully_connected_caps_layer(input_layer, capsules_size, nb_capsules, iterations=4):
"""
Second layer receiving inputs from all capsules of the layer below
**input:
*input_layer: (Tensor)
*capsules_size: (Integer) Size of each capsule
*nb_capsules: (Integer) Number of capsule
*iterations: (Integer) Number of iteration for the routing algorithm
i refer to the layer below.
j refer to the layer above (the current layer).
"""
shape = input_layer.get_shape().as_list()
# Get the size of each capsule in the previous layer and the current layer.
len_u_i = np.prod(shape[2])
len_v_j = capsules_size
# Get the number of capsules in the layer below.
nb_capsules_p = np.prod(shape[1])
# w_ij: Used to compute u_hat by multiplying the output ui of a capsule in the layer below
# with this matrix
# [nb_capsules_p, nb_capsules, len_v_j, len_u_i]
_init = tf.random_normal_initializer(stddev=0.01, seed=0)
_shape = (nb_capsules_p, nb_capsules, len_v_j, len_u_i)
w_ij = tf.get_variable('weight', shape=_shape, dtype=tf.float32, initializer=_init)
# Adding one dimension to the input [batch_size, nb_capsules_p, length(u_i), 1] ->
# [batch_size, nb_capsules_p, 1, length(u_i), 1]
# To allow the next dot product
input_layer = tf.reshape(input_layer, shape=(-1, nb_capsules_p, 1, len_u_i, 1))
input_layer = tf.tile(input_layer, [1, 1, nb_capsules, 1, 1])
# Eq.2, calc u_hat
# Prediction uj|i made by capsule i
# w_ij: [ nb_capsules_p, nb_capsules, len_v_j, len_u_i, ]
# input: [batch_size, nb_capsules_p, nb_capsules, len_ui, 1]
# u_hat: [batch_size, nb_capsules_p, nb_capsules, len_v_j, 1]
# Each capsule of the previous layer capsule layer is associated to a capsule of this layer
u_hat = tf.einsum('abdc,iabcf->iabdf', w_ij, input_layer)
# bij are the log prior probabilities that capsule i should be coupled to capsule j
# [nb_capsules_p, nb_capsules, 1, 1]
b_ij = tf.zeros(shape=[nb_capsules_p, nb_capsules, 1, 1], dtype=np.float32)
return routing(u_hat, b_ij, nb_capsules, nb_capsules_p, iterations=iterations)
def squash(vector):
"""
Squashing function corresponding to Eq. 1
**input: **
*vector
"""
vector += 0.00001  # small epsilon to avoid a division by zero when the capsule norm is 0
vec_squared_norm = tf.reduce_sum(tf.square(vector), -2, keep_dims=True)
scalar_factor = vec_squared_norm / (1 + vec_squared_norm) / tf.sqrt(vec_squared_norm)
vec_squashed = scalar_factor * vector # element-wise
return(vec_squashed)
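# A small standalone sketch (hypothetical hyper-parameters, built in a throw-away graph so
# it does not interfere with the model defined below) showing how the helpers above chain
# together: a plain conv layer, a convolutional capsule layer, then a fully connected
# capsule layer with one 16-D capsule per traffic-sign class, whose length is the class score.
with tf.Graph().as_default():
    images_ph = tf.placeholder(tf.float32, [None, 32, 32, 3])
    conv = tf.contrib.layers.conv2d(images_ph, 256, 9, 1, padding='VALID')          # [?, 24, 24, 256]
    primary_caps = conv_caps_layer(conv, capsules_size=8, nb_filters=16, kernel=9)  # [?, 1024, 8, 1]
    sign_caps = fully_connected_caps_layer(primary_caps, capsules_size=16, nb_capsules=43)
    print(primary_caps.get_shape(), sign_caps.get_shape())                          # -> [?, 43, 16, 1]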
###Output
_____no_output_____
###Markdown
Main Model
###Code
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import numpy as np
from model_base import ModelBase
import tensorflow as tf
class ModelTrafficSign(ModelBase):
"""
ModelTrafficSign.
This class is used to create the conv graph using:
Dynamic Routing Between Capsules
"""
# Numbers of label to predict
NB_LABELS = 43
def __init__(self, model_name, output_folder):
"""
**input:
*model_name: (Integer) Name of this model
*output_folder: Output folder to saved data (tensorboard, checkpoints)
"""
ModelBase.__init__(self, model_name, output_folder=output_folder)
def _build_inputs(self):
"""
Build tensorflow inputs
(Placeholder)
**return: **
*tf_images: Images Placeholder
*tf_labels: Labels Placeholder
"""
# Images 32*32*3
tf_images = tf.placeholder(tf.float32, [None, 32, 32, 3], name='images')
# Labels: [0, 1, 6, 20, ...]
tf_labels = tf.placeholder(tf.int64, [None], name='labels')
return tf_images, tf_labels
def _build_main_network(self, images, conv_2_dropout):
"""
This method is used to create the two convolutions and the CapsNet on the top
**input:
*images: Image placeholder
*conv_2_dropout: Dropout value placeholder
**return: **
*Caps1: Output of first Capsule layer
*Caps2: Output of second Capsule layer
"""
# First BLock:
# Layer 1: Convolution.
shape = (self.h.conv_1_size, self.h.conv_1_size, 3, self.h.conv_1_nb)
conv1 = self._create_conv(self.tf_images, shape, relu=True, max_pooling=False, padding='VALID')
# Layer 2: Convolution.
#shape = (self.h.conv_2_size, self.h.conv_2_size, self.h.conv_1_nb, self.h.conv_2_nb)
#conv2 = self._create_conv(conv1, shape, relu=True, max_pooling=False, padding='VALID')
conv1 = tf.nn.dropout(conv1, keep_prob=conv_2_dropout)
# Create the first capsules layer
caps1 = conv_caps_layer(
input_layer=conv1,
capsules_size=self.h.caps_1_vec_len,
nb_filters=self.h.caps_1_nb_filter,
kernel=self.h.caps_1_size)
# Create the second capsules layer used to predict the output
caps2 = fully_connected_caps_layer(
input_layer=caps1,
capsules_size=self.h.caps_2_vec_len,
nb_capsules=self.NB_LABELS,
iterations=self.h.routing_steps)
return caps1, caps2
def _build_decoder(self, caps2, one_hot_labels, batch_size):
"""
Build the decoder part from the last capsule layer
**input:
*Caps2: Output of second Capsule layer
*one_hot_labels
*batch_size
"""
labels = tf.reshape(one_hot_labels, (-1, self.NB_LABELS, 1))
# squeeze(caps2): [?, len_v_j, capsules_nb]
# labels: [?, NB_LABELS, 1] with capsules_nb == NB_LABELS
mask = tf.matmul(tf.squeeze(caps2), labels, transpose_a=True)
# Select the good capsule vector
capsule_vector = tf.reshape(mask, shape=(batch_size, self.h.caps_2_vec_len))
# capsule_vector: [?, len_v_j]
# Reconstruct image
fc1 = tf.contrib.layers.fully_connected(capsule_vector, num_outputs=400)
fc1 = tf.reshape(fc1, shape=(batch_size, 5, 5, 16))
upsample1 = tf.image.resize_nearest_neighbor(fc1, (8, 8))
conv1 = tf.layers.conv2d(upsample1, 4, (3,3), padding='same', activation=tf.nn.relu)
upsample2 = tf.image.resize_nearest_neighbor(conv1, (16, 16))
conv2 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
upsample3 = tf.image.resize_nearest_neighbor(conv2, (32, 32))
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# 3 channels for RGB
logits = tf.layers.conv2d(conv6, 3, (3,3), padding='same', activation=None)
decoded = tf.nn.sigmoid(logits, name='decoded')
tf.summary.image('reconstruction_img', decoded)
return decoded
def init(self):
"""
Init the graph
"""
# Get graph inputs
self.tf_images, self.tf_labels = self._build_inputs()
# Dropout inputs
self.tf_conv_2_dropout = tf.placeholder(tf.float32, shape=(), name='conv_2_dropout')
# Dynamic batch size
batch_size = tf.shape(self.tf_images)[0]
# Translate labels to one hot array
one_hot_labels = tf.one_hot(self.tf_labels, depth=self.NB_LABELS)
# Create the first convolution and the CapsNet
self.tf_caps1, self.tf_caps2 = self._build_main_network(self.tf_images, self.tf_conv_2_dropout)
# Build the images reconstruction
self.tf_decoded = self._build_decoder(self.tf_caps2, one_hot_labels, batch_size)
# Build the loss
_loss = self._build_loss(
self.tf_caps2, one_hot_labels, self.tf_labels, self.tf_decoded, self.tf_images)
(self.tf_loss_squared_rec, self.tf_margin_loss_sum, self.tf_predicted_class,
self.tf_correct_prediction, self.tf_accuracy, self.tf_loss, self.tf_margin_loss,
self.tf_reconstruction_loss) = _loss
# Build optimizer
optimizer = tf.train.AdamOptimizer(learning_rate=self.h.learning_rate)
self.tf_optimizer = optimizer.minimize(self.tf_loss, global_step=tf.Variable(0, trainable=False))
# Log value into tensorboard
tf.summary.scalar('margin_loss', self.tf_margin_loss)
tf.summary.scalar('accuracy', self.tf_accuracy)
tf.summary.scalar('total_loss', self.tf_loss)
tf.summary.scalar('reconstruction_loss', self.tf_reconstruction_loss)
self.tf_test = tf.random_uniform([2], minval=0, maxval=None, dtype=tf.float32, seed=None, name="tf_test")
self.init_session()
def _build_loss(self, caps2, one_hot_labels, labels, decoded, images):
"""
Build the loss of the graph
"""
# Get the length of each capsule
capsules_length = tf.sqrt(tf.reduce_sum(tf.square(caps2), axis=2, keep_dims=True))
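# Margin loss, Eq. 4 of the CapsNet paper: for every class k,
#   L_k = T_k * max(0, m_plus - ||v_k||)^2 + lambda * (1 - T_k) * max(0, ||v_k|| - m_minus)^2
# with m_plus = 0.9, m_minus = 0.1 and lambda = 0.5, matching the constants used below.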
max_l = tf.square(tf.maximum(0., 0.9 - capsules_length))
max_l = tf.reshape(max_l, shape=(-1, self.NB_LABELS))
max_r = tf.square(tf.maximum(0., capsules_length - 0.1))
max_r = tf.reshape(max_r, shape=(-1, self.NB_LABELS))
t_c = one_hot_labels
m_loss = t_c * max_l + 0.5 * (1 - t_c) * max_r
margin_loss_sum = tf.reduce_sum(m_loss, axis=1)
margin_loss = tf.reduce_mean(margin_loss_sum)
# Reconstruction loss
loss_squared_rec = tf.square(decoded - images)
reconstruction_loss = tf.reduce_mean(loss_squared_rec)
# 3. Total loss
loss = margin_loss + (0.0005 * reconstruction_loss)
# Accuracy
predicted_class = tf.argmax(capsules_length, axis=1)
predicted_class = tf.reshape(predicted_class, [tf.shape(capsules_length)[0]])
correct_prediction = tf.equal(predicted_class, labels)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
return (loss_squared_rec, margin_loss_sum, predicted_class, correct_prediction, accuracy,
loss, margin_loss, reconstruction_loss)
def optimize(self, images, labels, tb_save=True):
"""
Train the model
**input: **
*images: Image to train the model on
*labels: True classes
*tb_save: (Boolean) Log this optimization in tensorboard
**return: **
Loss: The loss of the model on this batch
Acc: Accuracy of the model on this batch
"""
tensors = [self.tf_optimizer, self.tf_margin_loss, self.tf_accuracy, self.tf_tensorboard]
_, loss, acc, summary = self.sess.run(tensors,
feed_dict={
self.tf_images: images,
self.tf_labels: labels,
self.tf_conv_2_dropout: self.h.conv_2_dropout
})
if tb_save:
# Write data to tensorboard
self.train_writer.add_summary(summary, self.train_writer_it)
self.train_writer_it += 1
return loss, acc
def evaluate(self, images, labels, tb_train_save=False, tb_test_save=False):
"""
Evaluate dataset
**input: **
*images: Image to train the model on
*labels: True classes
*tb_train_save: (Boolean) Log this optimization in tensorboard under the train part
*tb_test_save: (Boolean) Log this optimization in tensorboard under the test part
**return: **
Loss: The loss of the model on this batch
Acc: Accuracy of the model on this batch
"""
tensors = [self.tf_margin_loss, self.tf_accuracy, self.tf_tensorboard]
loss, acc, summary = self.sess.run(tensors,
feed_dict={
self.tf_images: images,
self.tf_labels: labels,
self.tf_conv_2_dropout: 1.
})
if tb_test_save:
# Write data to tensorboard
self.test_writer.add_summary(summary, self.test_writer_it)
self.test_writer_it += 1
if tb_train_save:
# Write data to tensorboard
self.train_writer.add_summary(summary, self.train_writer_it)
self.train_writer_it += 1
return loss, acc
def predict(self, images):
"""
Method used to predict a class
Return a softmax
**input: **
*images: Image to train the model on
**return:
*softmax: Softmax between all capsules
"""
tensors = [self.tf_caps2]
caps2 = self.sess.run(tensors,
feed_dict={
self.tf_images: images,
self.tf_conv_2_dropout: 1.
})[0]
# tf.sqrt(tf.reduce_sum(tf.square(caps2), axis=2, keep_dims=True))
caps2 = np.sqrt(np.sum(np.square(caps2), axis=2, keepdims=True))
caps2 = np.reshape(caps2, (len(images), self.NB_LABELS))
# softmax
softmax = np.exp(caps2) / np.sum(np.exp(caps2), axis=1, keepdims=True)
return softmax
def reconstruction(self, images, labels):
"""
Method used to get the reconstructions given a batch
Return the result as a softmax
**input: **
*images: Image to train the model on
*labels: True classes
"""
tensors = [self.tf_decoded]
decoded = self.sess.run(tensors,
feed_dict={
self.tf_images: images,
self.tf_labels: labels,
self.tf_conv_2_dropout: 1.
})[0]
return decoded
def evaluate_dataset(self, images, labels, batch_size=10):
"""
Evaluate a full dataset
This method is used to fully evaluate the dataset batch per batch. Useful when
the dataset can't be fit inside to the GPU.
*input: **
*images: Image to train the model on
*labels: True classes
*return: **
*loss: Loss overall your dataset
*accuracy: Accuracy overall your dataset
*predicted_class: Predicted class
"""
tensors = [self.tf_loss_squared_rec, self.tf_margin_loss_sum, self.tf_correct_prediction,
self.tf_predicted_class]
loss_squared_rec_list = None
margin_loss_sum_list = None
correct_prediction_list = None
predicted_class = None
b = 0
for batch in self.get_batches([images, labels], batch_size, shuffle=False):
images_batch, labels_batch = batch
loss_squared_rec, margin_loss_sum, correct_prediction, classes = self.sess.run(tensors,
feed_dict={
self.tf_images: images_batch,
self.tf_labels: labels_batch,
self.tf_conv_2_dropout: 1.
})
if loss_squared_rec_list is not None:
predicted_class = np.concatenate((predicted_class, classes))
loss_squared_rec_list = np.concatenate((loss_squared_rec_list, loss_squared_rec))
margin_loss_sum_list = np.concatenate((margin_loss_sum_list, margin_loss_sum))
correct_prediction_list = np.concatenate((correct_prediction_list, correct_prediction))
else:
predicted_class = classes
loss_squared_rec_list = loss_squared_rec
margin_loss_sum_list = margin_loss_sum
correct_prediction_list = correct_prediction
b += batch_size
margin_loss = np.mean(margin_loss_sum_list)
reconstruction_loss = np.mean(loss_squared_rec_list)
accuracy = np.mean(correct_prediction_list)
loss = margin_loss
return loss, accuracy, predicted_class
###Output
_____no_output_____
###Markdown
Train, Validate and Test the Model A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
###Code
# Init model
model = ModelTrafficSign("TrafficSign", output_folder="outputs")
model.init()
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
BATCH_SIZE = 20
# Utils method to print the current progression
def plot_progression(b, cost, acc, label):
    print("[%s] Batch ID = %s, loss = %s, acc = %s" % (label, b, cost, acc))
# Training pipeline
b = 0
valid_batch = inference_datagen.flow(X_valid, y_valid, batch_size=BATCH_SIZE)
best_validation_loss = None
augmented_factor = 0.99
decrease_factor = 0.90
train_batches = train_datagen.flow(X_train, y_train, batch_size=BATCH_SIZE)
augmented_train_batches = train_datagen_augmented.flow(X_train, y_train, batch_size=BATCH_SIZE)
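# NOTE: the loop below runs indefinitely; interrupt the kernel once the full-validation
# loss (checked every 1000 batches) stops improving. model.save() keeps the best weights.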
while True:
next_batch = next(
augmented_train_batches if random.uniform(0, 1) < augmented_factor else train_batches)
x_batch, y_batch = next_batch
### Training
cost, acc = model.optimize(x_batch, y_batch)
### Validation
x_batch, y_batch = next(valid_batch, None)
# Retrieve the cost and acc on this validation batch and save it in tensorboard
cost_val, acc_val = model.evaluate(x_batch, y_batch, tb_test_save=True)
if b % 10 == 0: # Plot the last results
plot_progression(b, cost, acc, "Train")
plot_progression(b, cost_val, acc_val, "Validation")
if b % 1000 == 0: # Test the model on all the validation
print("Evaluate full validation dataset ...")
loss, acc, _ = model.evaluate_dataset(X_valid, y_valid)
print("Current loss: %s Best loss: %s" % (loss, best_validation_loss))
plot_progression(b, loss, acc, "TOTAL Validation")
if best_validation_loss is None or loss < best_validation_loss:
best_validation_loss = loss
model.save()
augmented_factor = augmented_factor * decrease_factor
print("Augmented Factor = %s" % augmented_factor)
b += 1
# Test the model on the test set
# Evaluate all the dataset
loss, acc, predicted_class = model.evaluate_dataset(X_test, y_test)
print("Test Accuracy = ", acc)
print("Test Loss = ", loss)
###Output
Test Accuracy = 0.967062549485
Test Loss = 0.0440878
###Markdown
--- Step 3: Test a Model on New ImagesTo give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name. Load and Output the Images
###Code
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import os
images = []
# Read all image into the folder
for filename in os.listdir("from_web"):
img = Image.open(os.path.join("from_web", filename))
img = img.resize((32, 32))
plt.imshow(img)
plt.show()
img = np.array(img) / 255
images.append(img)
###Output
_____no_output_____
###Markdown
Predict the Sign Type for Each Image
###Code
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
# Get the prediction
predictions = model.predict(images)
###Output
_____no_output_____
###Markdown
Analyze Performance
###Code
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
# Plot the result
fig, axs = plt.subplots(5, 2, figsize=(10, 25))
axs = axs.ravel()
for i in range(10):
if i%2 == 0:
axs[i].axis('off')
axs[i].imshow(images[i // 2])
axs[i].set_title("Prediction: %s" % id_to_name[np.argmax(predictions[i // 2])])
else:
axs[i].bar(np.arange(43), predictions[i // 2])
axs[i].set_ylabel("Softmax")
axs[i].set_xlabel("Labels")
plt.show()
###Output
_____no_output_____
###Markdown
Output Top 5 Softmax Probabilities For Each Image Found on the Web For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.htmltop_k) could prove helpful here. The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.`tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. `tf.nn.top_k` is used to choose the three classes with the highest probability:``` (5, 6) arraya = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202], [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401, 0.15899337], [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 , 0.23892179], [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 , 0.16505091], [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137, 0.09155967]])```Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:```TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202], [ 0.28086119, 0.27569815, 0.18063401], [ 0.26076848, 0.23892179, 0.23664738], [ 0.29198961, 0.26234032, 0.16505091], [ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5], [0, 1, 4], [0, 5, 1], [1, 3, 5], [1, 4, 3]], dtype=int32))```Looking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`, you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.
###Code
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
Project WriteupOnce you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file. > **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. --- Step 4 (Optional): Visualize the Neural Network's State with Test Images This Section is not required to complete but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol. Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for its second convolutional layer you could enter conv2 as the tf_activation variable. For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image. Your output should look something like this (above)
###Code
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, ect if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
if activation_min != -1 & activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
###Output
_____no_output_____
###Markdown
Self-Driving Car Engineer Nanodegree Deep Learning Project: Build a Traffic Sign Recognition ClassifierIn this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary. > **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/481/view) for this project. The [rubric](https://review.udacity.com/!/rubrics/481/view) contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file. >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can typically be edited by double-clicking the cell to enter edit mode. --- Step 0: Load The Data
###Code
import csv
import random
import numpy as np
from numpy import newaxis
import tensorflow as tf
from tensorflow.contrib.layers import flatten
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.utils import shuffle
from scipy.misc import imread, imsave, imresize
from skimage import exposure
import warnings
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
training_file = 'train.p'
validation_file='valid.p'
testing_file = 'test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
print("X_train shape:", X_train.shape)
print("y_train shape:", y_train.shape)
print("X_test shape:", X_test.shape)
print("y_test shape:", y_test.shape)
###Output
X_train shape: (34799, 32, 32, 3)
y_train shape: (34799,)
X_test shape: (12630, 32, 32, 3)
y_test shape: (12630,)
###Markdown
--- Step 1: Dataset Summary & ExplorationThe pickled data is a dictionary with 4 key/value pairs:- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.- `'sizes'` is a list containing tuples, (width, height) representing the original width and height the image.- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results. Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
###Code
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
import numpy as np
# TODO: Number of training examples
n_train = len(X_train)
# TODO: Number of validation examples
n_validation = len(X_valid)
# TODO: Number of testing examples.
n_test = len(X_test)
# TODO: What's the shape of an traffic sign image?
image_shape = X_train[0].shape
# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(np.unique(y_train))
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
# Show one example image per class: take the first occurrence of each unique label
unique_image_labels, unique_images_start_indices = np.unique(y_train, return_index=True)
i=1
for index in unique_images_start_indices:
plt.subplot(8,6,i)
#plt.xlabel(str(sign_names[str(y_train[index])]))
plt.imshow(X_train[index])
#plt.xlabel(str(sign_names[str(y_train[index])]))
#figure.savefig('unique_traffic_signs/'+str(unique_image_labels[i-1]))
i=i+1
plt.show()
###Output
_____no_output_____
###Markdown
Include an exploratory visualization of the dataset Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc. The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.**NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
###Code
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline
for i in range(10):
index = random.randint(0, len(X_train) - 1)
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
print(y_train[index])
# histogram of label frequency
hist, bins = np.histogram(y_train, bins=n_classes)
width = 0.7 * (bins[1] - bins[0])
center = (bins[:-1] + bins[1:]) / 2
plt.bar(center, hist, align='center', width=width)
plt.show()
###Output
_____no_output_____
###Markdown
---- Step 2: Design and Test a Model ArchitectureDesign and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play! With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. There are various aspects to consider when thinking about this problem:- Neural network architecture (is the network over or underfitting?)- Play around preprocessing techniques (normalization, rgb to grayscale, etc)- Number of examples per label (some have more than others).- Generate fake data.Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these. Pre-process the Data Set (normalization, grayscale, etc.) Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project. Other pre-processing steps are optional. You can try different techniques to see if it improves performance. Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
###Code
def rgb2gray(rgb):
r, g, b = rgb[:,:,0], rgb[:,:,1], rgb[:,:,2]
gray = 0.2989 * r + 0.5870 * g + 0.1140 * b
    gray = gray[:, :, np.newaxis]
#print(gray.shape)
return gray
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
### Preprocess the data here.
### Feel free to use as many code cells as needed.
# Convert to grayscale
X_train_rgb = X_train
X_train_gry = np.sum(X_train/3, axis=3, keepdims=True)
X_valid_rgb = X_valid
X_valid_gry = np.sum(X_valid/3, axis=3, keepdims=True)
X_test_rgb = X_test
X_test_gry = np.sum(X_test/3, axis=3, keepdims=True)
print('RGB shape:', X_train_rgb.shape)
print('Grayscale shape:', X_train_gry.shape)
X_train = X_train_gry
X_test = X_test_gry
X_valid=X_valid_gry
print('xtrain: ', X_train.shape)
X_train = (X_train - 128)/128
X_valid= (X_valid - 128)/128
X_test = (X_test - 128)/128
print(np.mean(X_train))
print(np.mean(X_test))
for i in range(2):
    index = random.randint(0, len(X_train) - 1)
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
    plt.imshow(image, cmap="gray")
print(y_train[index])
###Output
_____no_output_____
###Markdown
Model Architecture
###Code
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y,n_classes)
print(n_classes)
### Define your architecture here.
### Feel free to use as many code cells as needed.
EPOCHS = 50
BATCH_SIZE = 128
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# TODO: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
cw1 = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))
cb1 = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, cw1, strides=[1, 1, 1, 1], padding='VALID') + cb1
# TODO: Activation.
conv1 = tf.nn.relu(conv1)
# TODO: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# TODO: Layer 2: 14x14x6 Convolutional. Output = 10x10x16.
cw2 = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
cb2 = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, cw2, strides=[1, 1, 1, 1], padding='VALID') + cb2
# TODO: Activation.
conv2 = tf.nn.relu(conv2)
# TODO: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# TODO: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# TODO: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# TODO: Activation.
fc1 = tf.nn.relu(fc1)
# TODO: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# TODO: Activation.
fc2 = tf.nn.relu(fc2)
# TODO: Layer 5: Fully Connected. Input = 84. Output = 10.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, n_classes), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(n_classes))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
###Output
_____no_output_____
###Markdown
Train, Validate and Test the Model A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
###Code
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
Training_accuracy = evaluate(X_train, y_train)
print("EPOCH {} ...".format(i+1))
print("Training_accuracy = {:.3f}".format(Training_accuracy))
print()
validation_accuracy = evaluate(X_valid, y_valid)
#print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet')
print("Model saved")
###Output
_____no_output_____
###Markdown
--- Step 3: Test a Model on New ImagesTo give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name. Load and Output the Images
###Code
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
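# A minimal sketch of how the five web images could be loaded and displayed; the
# folder ./web_signs/ and its file names are hypothetical placeholders.
import glob
import cv2
import numpy as np
import matplotlib.pyplot as plt

web_files = sorted(glob.glob('./web_signs/*.jpg'))
web_images = []
for fname in web_files:
    img = cv2.imread(fname)                     # OpenCV loads images as BGR
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # match the RGB order of the pickled data
    img = cv2.resize(img, (32, 32))             # match the 32x32 training size
    web_images.append(img)
    plt.figure(figsize=(1, 1))
    plt.imshow(img)
web_images = np.array(web_images)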
###Output
_____no_output_____
###Markdown
Predict the Sign Type for Each Image
###Code
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
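# A hedged sketch of producing predictions with the graph defined above; `web_images`
# is the hypothetical array from the loading sketch, and the preprocessing mirrors
# the grayscale conversion and (x - 128)/128 normalization used for training.
web_gry = np.sum(web_images / 3, axis=3, keepdims=True)
web_norm = (web_gry - 128) / 128
with tf.Session() as sess:
    saver.restore(sess, './lenet')
    web_predictions = sess.run(tf.argmax(logits, 1), feed_dict={x: web_norm})
print(web_predictions)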
###Output
_____no_output_____
###Markdown
Analyze Performance
###Code
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
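# A hedged sketch: compare the predictions from the previous sketch against
# hypothetical ground-truth labels for the five downloaded images.
web_labels = np.array([1, 8, 14, 27, 17])  # placeholder class ids, one per image
web_accuracy = np.mean(web_predictions == web_labels)
print("Accuracy on new images = {:.0%}".format(web_accuracy))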
###Output
_____no_output_____
###Markdown
Output Top 5 Softmax Probabilities For Each Image Found on the Web For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.html#top_k) could prove helpful here. The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image. `tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids. Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. `tf.nn.top_k` is used to choose the three classes with the highest probability:``` (5, 6) array a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202], [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401, 0.15899337], [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 , 0.23892179], [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 , 0.16505091], [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137, 0.09155967]])```Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:```TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202], [ 0.28086119, 0.27569815, 0.18063401], [ 0.26076848, 0.23892179, 0.23664738], [ 0.29198961, 0.26234032, 0.16505091], [ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5], [0, 1, 4], [0, 5, 1], [1, 3, 5], [1, 4, 3]], dtype=int32))```Looking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`; you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.
###Code
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
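# A hedged sketch using tf.nn.top_k as suggested above; `web_norm` is the
# hypothetical preprocessed batch from the earlier sketches.
softmax_probs = tf.nn.softmax(logits)
top5 = tf.nn.top_k(softmax_probs, k=5)
with tf.Session() as sess:
    saver.restore(sess, './lenet')
    top5_values, top5_indices = sess.run(top5, feed_dict={x: web_norm})
print(top5_values)   # top-5 softmax probabilities per image
print(top5_indices)  # corresponding class ids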
###Output
_____no_output_____
###Markdown
Project Writeup Once you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file. > **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. --- Step 4 (Optional): Visualize the Neural Network's State with Test Images This section is not required to complete but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol. Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimulus image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process; for instance, if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for its second convolutional layer you could enter conv2 as the tf_activation variable. For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image. Your output should look something like this (above)
###Code
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
    # with size, normalization, etc. if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
        if activation_min != -1 and activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
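# A hedged example of how the helper might be called; `conv1` is local to LeNet()
# in this notebook, so it would first have to be exposed (e.g. returned alongside
# logits) before this could run.
# with tf.Session() as sess:
#     saver.restore(sess, './lenet')
#     outputFeatureMap(web_norm[0:1], conv1, plt_num=1)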
###Output
_____no_output_____
###Markdown
Self-Driving Car Engineer Nanodegree Deep Learning Project: Build a Traffic Sign Recognition Classifier In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary. > **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/481/view) for this project. The [rubric](https://review.udacity.com/!/rubrics/481/view) contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this IPython notebook and also discuss the results in the writeup file. >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited typically by double-clicking the cell to enter edit mode. --- Step 0: Load The Data
###Code
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
training_file = "./train.p"
validation_file="./valid.p"
testing_file = "./test.p"
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
#print(y_test.shape)
#print("done step0")
###Output
_____no_output_____
###Markdown
--- Step 1: Dataset Summary & ExplorationThe pickled data is a dictionary with 4 key/value pairs:- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.- `'sizes'` is a list containing tuples, (width, height) representing the original width and height the image.- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results. Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
###Code
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
import numpy as np
# TODO: Number of training examples
n_train = X_train.shape[0]
# TODO: Number of testing examples.
n_test = X_test.shape[0]
# TODO: What's the shape of an traffic sign image?
image_shape = X_test.shape[1:3]
# TODO: How many unique classes/labels there are in the dataset.
n_classes = np.unique(y_train).shape[0]
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
###Output
Number of training examples = 34799
Number of testing examples = 12630
Image data shape = (32, 32)
Number of classes = 43
###Markdown
Include an exploratory visualization of the dataset Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.**NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.
###Code
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
import random
import cv2
# Visualizations will be shown in the notebook.
%matplotlib inline
X_train2 = X_train.copy() #was meant for data augmentation
y_train2 = y_train.copy() #was meant for data augmentation
index = random.randint(0, n_train - 1)
image = X_train[index].squeeze()
testM = cv2.getRotationMatrix2D((16,16),20,1)
plt.figure(figsize=(1,1))
plt.imshow(image)
print(y_train[index])
plt.figure()
trainFrequency = np.bincount(y_train)
plt.plot(trainFrequency)
plt.axis([0,42,0,2500])
plt.title("Distribution of values in training set")
plt.xlabel("class value")
plt.ylabel("frequency")
plt.show()
###Output
28
###Markdown
---- Step 2: Design and Test a Model ArchitectureDesign and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play! With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. There are various aspects to consider when thinking about this problem:- Neural network architecture (is the network over or underfitting?)- Play around preprocessing techniques (normalization, rgb to grayscale, etc)- Number of examples per label (some have more than others).- Generate fake data.Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these. Pre-process the Data Set (normalization, grayscale, etc.) Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
###Code
### Preprocess the data here. Preprocessing steps could include normalization, converting to grayscale, etc.
### Feel free to use as many code cells as needed.
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
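# A hedged sketch of the (pixel - 128)/128 normalization suggested in the template
# above; shown commented out because this particular model is trained on the raw
# RGB images.
# X_train = (X_train.astype(np.float32) - 128) / 128
# X_valid = (X_valid.astype(np.float32) - 128) / 128
# X_test = (X_test.astype(np.float32) - 128) / 128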
###Output
_____no_output_____
###Markdown
Model Architecture
###Code
### Define your architecture here.
### Feel free to use as many code cells as needed.
from tensorflow.contrib.layers import flatten
import tensorflow as tf
EPOCHS = 30
BATCH_SIZE = 128
def LeNet_mod(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# SOLUTION: Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x6.
depth1 = 16
conv1_W = tf.Variable(tf.truncated_normal(shape=(5,5, 3, depth1), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(depth1))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# SOLUTION: Activation.
conv1 = tf.nn.relu(conv1)
poolz = 3
pools = 2
# SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6.
#conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Layer 2: Convolutional. Output = 10x10x16.
depth2 = 20
conv2_W = tf.Variable(tf.truncated_normal(shape=(3, 3, depth1, depth2), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(depth2))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# SOLUTION: Activation.
conv2 = tf.nn.relu(conv2)
# SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
#conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
depth3 = 20
conv3_W = tf.Variable(tf.truncated_normal(shape=(3, 3, int(depth2), depth3), mean = mu, stddev = sigma))
conv3_b = tf.Variable(tf.zeros(depth3))
conv3 = tf.nn.conv2d(conv2, conv3_W, strides=[1, 1, 1, 1], padding='VALID') + conv3_b
# SOLUTION: Activation.
conv3 = tf.nn.relu(conv3)
conv3 = tf.nn.max_pool(conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
#conv3 = tf.nn.max_pool(conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
depth4 = 20
conv4_W = tf.Variable(tf.truncated_normal(shape=(3, 3, int(depth3), depth4), mean = mu, stddev = sigma))
    conv4_b = tf.Variable(tf.zeros(depth4))
conv4 = tf.nn.conv2d(conv3, conv4_W, strides=[1, 1, 1, 1], padding='VALID') + conv4_b
# SOLUTION: Activation.
conv4 = tf.nn.relu(conv4)
# SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
#conv4 = tf.nn.max_pool(conv4, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
depth5 = 20
conv5_W = tf.Variable(tf.truncated_normal(shape=(3, 3, int(depth4), depth5), mean = mu, stddev = sigma))
conv5_b = tf.Variable(tf.zeros(depth5))
conv5 = tf.nn.conv2d(conv4, conv5_W, strides=[1, 1, 1, 1], padding='VALID') + conv5_b
# SOLUTION: Activation.
conv5 = tf.nn.relu(conv5)
# SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
#conv3 = tf.nn.max_pool(conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
depth6 = 20
conv6_W = tf.Variable(tf.truncated_normal(shape=(3, 3, int(depth5), depth6), mean = mu, stddev = sigma))
conv6_b = tf.Variable(tf.zeros(depth6))
conv6 = tf.nn.conv2d(conv5, conv6_W, strides=[1, 1, 1, 1], padding='VALID') + conv6_b
# SOLUTION: Activation.
conv6 = tf.nn.relu(conv6)
# SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
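    # Note: this pooling is applied to conv4, so the conv5 and conv6 layers defined
    # above do not feed into the flattened features used by the fully connected layers.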
conv6 = tf.nn.max_pool(conv4, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv6)
print(fc0.get_shape())
# SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(int(fc0.get_shape()[1]), 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# SOLUTION: Activation.
fc1 = tf.nn.relu(fc1)
# SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# SOLUTION: Activation.
fc2 = tf.nn.relu(fc2)
# SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 43.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, n_classes), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(n_classes))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
x = tf.placeholder(tf.float32, (None, 32, 32, 3))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, n_classes)
###Output
_____no_output_____
###Markdown
Train, Validate and Test the Model A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
###Code
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
rate = 0.001
logits = LeNet_mod(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate,epsilon=1e-4)
training_operation = optimizer.minimize(loss_operation)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
#saver.restore(sess, "./lenet")
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_valid, y_valid)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './checkpoints2/3_11_2017_6_12pm.ckpt')
print("Model saved")
#tf.reset_default_graph()
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('./checkpoints2'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
###Output
Test Accuracy = 0.934
###Markdown
--- Step 3: Test a Model on New ImagesTo give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name. Load and Output the Images
###Code
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
im1_data = cv2.imread("./Internet_Pictures/01_01_edited.jpg")
im1_class = 1
im1_data = np.expand_dims(im1_data,axis=0)
im1_class = np.expand_dims(im1_class,axis=0)
#print(im1_data.shape)
im2_data = cv2.imread("./Internet_Pictures/02_08_edited.jpg")
im2_class = 8
im2_data = np.expand_dims(im2_data,axis=0)
im2_class = np.expand_dims(im2_class,axis=0)
#print(im2_data.shape)
im3_data = cv2.imread("./Internet_Pictures/03_14_edited.jpg")
im3_class = 14
im3_data = np.expand_dims(im3_data,axis=0)
im3_class = np.expand_dims(im3_class,axis=0)
#print(im3_data.shape)
im4_data = cv2.imread("./Internet_Pictures/04_27_edited.jpg")
im4_class = 27
im4_data = np.expand_dims(im4_data,axis=0)
im4_class = np.expand_dims(im4_class,axis=0)
#print(im4_data.shape)
im5_data = cv2.imread("./Internet_Pictures/05_17_edited.jpg")
im5_class = 17
im5_data = np.expand_dims(im5_data,axis=0)
im5_class = np.expand_dims(im5_class,axis=0)
#print(im5_data.shape)
online_images_data = np.concatenate((im1_data,im2_data,im3_data,im4_data,im5_data),axis=0)
online_images_classes = np.concatenate((im1_class,im2_class,im3_class,im4_class,im5_class),axis=0)
#print(online_images_data.shape)
#print(online_images_classes.shape)
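# Note: cv2.imread returns images in BGR channel order, while the pickled training
# images are RGB; converting each image with cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# before stacking would keep the channel order consistent with training.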
###Output
_____no_output_____
###Markdown
Predict the Sign Type for Each Image
###Code
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('./checkpoints2'))
localX = online_images_data
localY = online_images_classes
#num_examples = len(localX)
#total_accuracy = 0
#sess = tf.get_default_session()
#batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
orderedResults = sess.run(logits, feed_dict={x: localX})
#print(orderedResults)
def findIndex(element, array):
for i in range(0,len(array)):
if element == array[i]:
return i
return -1
def findTopFive(elements,array):
response = []
#for i in range(0,elements.shape[0]):
#response.append(findIndex(elements[i],array))
for i in range(0,1):
response.append(findIndex(elements,array))
return response
for i in range (0,5):
#these two lines give the top 5 predicted classes by the network per image
currentImage = np.sort(orderedResults[i])[-1]
#print(currentImage)
#currentImage = firstImage[::-1]
first_indices = findTopFive(currentImage,orderedResults[i])
print("The " + str(i+1) + " image has a calculated classID of :")
print(first_indices)
#print(first_indices)
#print(firstImage)
#print(orderedResults)
#localAccuracy = sess.run(accuracy_operation, feed_dict={x: localX, y: localY})
#total_accuracy += (accuracy * len(batch_x))
#localResult = total_accuracy / num_examples
#print("Test Accuracy = {:.3f}".format(localAccuracy))
###Output
The 1 image has classID of :
[1]
The 2 image has classID of :
[3]
The 3 image has classID of :
[14]
The 4 image has classID of :
[1]
The 5 image has classID of :
[17]
###Markdown
Analyze Performance
###Code
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('./checkpoints2'))
localX = online_images_data
localY = online_images_classes
#num_examples = len(localX)
#total_accuracy = 0
sess = tf.get_default_session()
#batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
#accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
localAccuracy = sess.run(accuracy_operation, feed_dict={x: localX, y: localY})
#total_accuracy += (accuracy * len(batch_x))
#localResult = total_accuracy / num_examples
print("Test Accuracy = {:.3f}".format(localAccuracy))
###Output
Test Accuracy = 0.600
###Markdown
Output Top 5 Softmax Probabilities For Each Image Found on the Web For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.html#top_k) could prove helpful here. The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image. `tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids. Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. `tf.nn.top_k` is used to choose the three classes with the highest probability:``` (5, 6) array a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202], [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401, 0.15899337], [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 , 0.23892179], [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 , 0.16505091], [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137, 0.09155967]])```Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:```TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202], [ 0.28086119, 0.27569815, 0.18063401], [ 0.26076848, 0.23892179, 0.23664738], [ 0.29198961, 0.26234032, 0.16505091], [ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5], [0, 1, 4], [0, 5, 1], [1, 3, 5], [1, 4, 3]], dtype=int32))```Looking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`; you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.
###Code
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('./checkpoints2'))
localX = online_images_data
localY = online_images_classes
orderedResults = sess.run(logits, feed_dict={x: localX})
#print(orderedResults)
#orderedResults is our results from the network before softmaxing
#print(orderedResults)
def findIndex(element, array):
for i in range(0,len(array)):
if element == array[i]:
return i
return -1
def findTopFive(elements,array):
response = []
#for i in range(0,elements.shape[0]):
#response.append(findIndex(elements[i],array))
for i in range(0,5):
response.append(findIndex(elements[i],array))
return response
for i in range (0,5):
#these two lines give the top 5 predicted classes by the network per image
currentImage_softmax = sess.run(tf.nn.softmax(orderedResults[i]))
#print(currentImage_softmax.shape)
currentImage_top5 = np.sort(currentImage_softmax)[-5:]
#print(currentImage_softmax)
first_five = findTopFive(currentImage_top5,currentImage_softmax)
#currentImage = firstImage[::-1]
#first_indices = findTopFive(currentImage,orderedResults[i])
print("The " + str(i+1) + " image has classID of (from 5th most likely to most likely:")
print(first_five)
print("and these classID calculations have softmax values of")
print(currentImage_top5)
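# A hedged alternative to the manual findTopFive/np.sort bookkeeping above:
# tf.nn.top_k, as suggested in the template, returns the k largest softmax values
# and their class ids in one call, e.g.
# top5 = sess.run(tf.nn.top_k(tf.nn.softmax(tf.constant(orderedResults)), k=5))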
###Output
The 1 image has classID of (from 5th most likely to most likely:
[4, 2, 6, 5, 1]
and these classID calculations have softmax values of
[ 1.22350932e-26 3.00577236e-26 4.26716360e-25 5.34040696e-20
1.00000000e+00]
The 2 image has classID of (from 5th most likely to most likely:
[1, 13, 2, 7, 3]
and these classID calculations have softmax values of
[ 6.50436363e-08 1.87898294e-07 6.85490193e-07 1.28690782e-03
9.98712182e-01]
The 3 image has classID of (from 5th most likely to most likely:
[33, 5, 35, 38, 14]
and these classID calculations have softmax values of
[ 5.33221908e-07 8.56112001e-06 4.51639993e-04 1.12006981e-02
9.88338470e-01]
The 4 image has classID of (from 5th most likely to most likely:
[40, 25, 11, 20, 1]
and these classID calculations have softmax values of
[ 2.49249599e-08 1.74288687e-07 4.26926505e-07 1.40747161e-05
9.99985337e-01]
The 5 image has classID of (from 5th most likely to most likely:
[39, 10, 20, 37, 17]
and these classID calculations have softmax values of
[ 2.23625263e-12 1.96007620e-11 2.61837808e-11 1.53641555e-09
1.00000000e+00]
###Markdown
--- Step 4: Visualize the Neural Network's State with Test Images This section is not required to complete but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol. Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimulus image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process; for instance, if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for its second convolutional layer you could enter conv2 as the tf_activation variable. For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image. Your output should look something like this (above)
###Code
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
    # with size, normalization, etc. if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
    # If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
        if activation_min != -1 and activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
###Output
_____no_output_____
###Markdown
Question 9: Discuss how you used the visual output of your trained network's feature maps to show that it had learned to look for interesting characteristics in traffic sign images. **Answer:**
###Code
# No answer
###Output
_____no_output_____
###Markdown
Self-Driving Car Engineer Nanodegree Deep Learning Project: Build a Traffic Sign Recognition Classifier In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary. > **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/481/view) for this project. The [rubric](https://review.udacity.com/!/rubrics/481/view) contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this IPython notebook and also discuss the results in the writeup file. >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited typically by double-clicking the cell to enter edit mode. --- Step 0: Load The Data
###Code
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
training_file = "./train.p"
validation_file= "./valid.p"
testing_file = "./test.p"
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
print("X_train shape:", X_train.shape)
print("y_train shape:", y_train.shape)
print("X_valid shape:", X_valid.shape)
print("y_valid shape:", y_valid.shape)
print("X_test shape:", X_test.shape)
print("y_test shape:", y_test.shape)
###Output
X_train shape: (34799, 32, 32, 3)
y_train shape: (34799,)
X_valid shape: (4410, 32, 32, 3)
y_valid shape: (4410,)
X_test shape: (12630, 32, 32, 3)
y_test shape: (12630,)
###Markdown
--- Step 1: Dataset Summary & ExplorationThe pickled data is a dictionary with 4 key/value pairs:- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.- `'sizes'` is a list containing tuples, (width, height) representing the original width and height the image.- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results. Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
###Code
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
import numpy as np
# TODO: Number of training examples
n_train = len(X_train)
# TODO: Number of validation examples
n_validation = len(X_valid)
# TODO: Number of testing examples.
n_test = len(X_test)
# TODO: What's the shape of an traffic sign image?
image_shape = X_train[0].shape
# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(np.unique(y_train))
print("Number of training examples =", n_train)
print("Number of validation examples =", n_validation)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
###Output
Number of training examples = 34799
Number of validation examples = 4410
Number of testing examples = 12630
Image data shape = (32, 32, 3)
Number of classes = 43
###Markdown
Include an exploratory visualization of the dataset Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc. The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.**NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
###Code
def plot_histogram(data, name):
class_list = range(n_classes)
label_list = data.tolist()
counts = [label_list.count(i) for i in class_list]
plt.bar(class_list, counts)
plt.xlabel(name)
plt.show()
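# Note: this first plot_histogram definition is superseded by the re-definition
# further down in this cell, which builds the counts with np.histogram instead.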
### Data exploration visualization code goes here.
import matplotlib.pyplot as plt
import random
from random import shuffle
# Visualizations will be shown in the notebook.
%matplotlib inline
fig, axs = plt.subplots(4,5, figsize=(10,5))
fig.subplots_adjust(hspace = .5, wspace = .001)
axs = axs.ravel()
for i in range(20):
    index = random.randint(0, len(X_train) - 1)
axs[i].axis('off')
axs[i].imshow(X_train[index])
axs[i].set_title(y_train[index])
def plot_histogram(data, name, color):
hist, bins = np.histogram(data, bins=n_classes)
plt.bar(range(n_classes), hist, width= 0.5, color= color)
plt.xlabel(name)
plt.show()
plot_histogram(y_train,"Training set: # of data points per class",'red')
plot_histogram(y_valid,"Validation set: # of data point per class",'green')
plot_histogram(y_test,"Testing set: # of data point per class",'blue')
###Output
_____no_output_____
###Markdown
---- Step 2: Design and Test a Model ArchitectureDesign and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play! With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. There are various aspects to consider when thinking about this problem:- Neural network architecture (is the network over or underfitting?)- Play around preprocessing techniques (normalization, rgb to grayscale, etc)- Number of examples per label (some have more than others).- Generate fake data.Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these. Pre-process the Data Set (normalization, grayscale, etc.) Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project. Other pre-processing steps are optional. You can try different techniques to see if it improves performance. Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
###Code
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
# Convert to grayscale
X_train_rgb = X_train
X_train_gry = np.sum(X_train/3, axis=3, keepdims=True)
X_valid_rgb = X_valid
X_valid_gry = np.sum(X_valid/3, axis=3, keepdims=True)
X_test_rgb = X_test
X_test_gry = np.sum(X_test/3, axis=3, keepdims=True)
print('RGB shape:', X_train_rgb.shape)
print('Grayscale shape:', X_train_gry.shape)
fig, axs = plt.subplots(1,2, figsize=(10, 3))
axs = axs.ravel()
axs[0].axis('off')
axs[0].set_title('RGB')
axs[0].imshow(X_train_rgb[500].squeeze())
axs[1].axis('off')
axs[1].set_title('grayscale')
axs[1].imshow(X_train_gry[500].squeeze(), cmap='gray')
# Normalize the grayscaled train, validation and test datasets to (-1,1)
X_train_gry_normalized = (X_train_gry - np.mean(X_train_gry)) / (np.std(X_train_gry))
X_valid_gry_normalized = (X_valid_gry - np.mean(X_valid_gry)) / (np.std(X_valid_gry))
X_test_gry_normalized = (X_test_gry - np.mean(X_test_gry)) / (np.std(X_test_gry))
print(np.floor(np.mean(X_train_gry_normalized)))
print(np.floor(np.mean(X_valid_gry_normalized)))
print(np.floor(np.mean(X_test_gry_normalized)))
print("Original shape:", X_train_gry.shape)
print("Normalized shape:", X_train_gry_normalized.shape)
fig, axs = plt.subplots(1,2, figsize=(10, 3))
axs = axs.ravel()
axs[0].axis('off')
axs[0].set_title('normalized')
axs[0].imshow(X_train_gry_normalized[500].squeeze(), cmap='gray')
axs[1].axis('off')
axs[1].set_title('original')
axs[1].imshow(X_train_gry[500].squeeze(), cmap='gray')
## Shuffle the training dataset
from sklearn.utils import shuffle
X_train_gry_normalized, y_train = shuffle(X_train_gry_normalized, y_train)
print('complete')
###Output
complete
###Markdown
Model Architecture LeNet 5 Architecture ![image.png](attachment:image.png)
###Code
### Define your architecture here.
### Feel free to use as many code cells as needed.
import tensorflow as tf
from tensorflow.contrib.layers import flatten
EPOCHS = 60
BATCH_SIZE = 100
def LeNet(x):
# Hyperparameters
mu = 0
sigma = 0.1
# TODO: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
W1 = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))
x = tf.nn.conv2d(x, W1, strides=[1, 1, 1, 1], padding='VALID')
b1 = tf.Variable(tf.zeros(6))
x = tf.nn.bias_add(x, b1)
print("layer 1 shape:",x.get_shape())
# TODO: Activation.
x = tf.nn.relu(x)
# TODO: Pooling. Input = 28x28x6. Output = 14x14x6.
x = tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# TODO: Layer 2: Convolutional. Output = 10x10x16.
W2 = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
x = tf.nn.conv2d(x, W2, strides=[1, 1, 1, 1], padding='VALID')
b2 = tf.Variable(tf.zeros(16))
x = tf.nn.bias_add(x, b2)
# TODO: Activation.
x = tf.nn.relu(x)
# TODO: Pooling. Input = 10x10x16. Output = 5x5x16.
x = tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# TODO: Flatten. Input = 5x5x16. Output = 400.
x = flatten(x)
# TODO: Layer 3: Fully Connected. Input = 400. Output = 120.
W3 = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
b3 = tf.Variable(tf.zeros(120))
x = tf.add(tf.matmul(x, W3), b3)
# TODO: Activation.
x = tf.nn.relu(x)
# Dropout
x = tf.nn.dropout(x, keep_prob)
# TODO: Layer 4: Fully Connected. Input = 120. Output = 84.
W4 = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
b4 = tf.Variable(tf.zeros(84))
x = tf.add(tf.matmul(x, W4), b4)
# TODO: Activation.
x = tf.nn.relu(x)
# Dropout
x = tf.nn.dropout(x, keep_prob)
# TODO: Layer 5: Fully Connected. Input = 84. Output = 43.
W5 = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma))
b5 = tf.Variable(tf.zeros(43))
logits = tf.add(tf.matmul(x, W5), b5)
return logits
print('complete')
###Output
complete
###Markdown
Train, Validate and Test the Model A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
###Code
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
tf.reset_default_graph()
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
keep_prob = tf.placeholder(tf.float32) # probability to keep units
one_hot_y = tf.one_hot(y, 43)
print('done')
rate = 0.0009
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, len(X_data), BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / len(X_data)
print('done')
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print("Training...")
print()
for i in range(EPOCHS):
X_train_gry_normalized, y_train = shuffle(X_train_gry_normalized, y_train)
for offset in range(0, len(X_train_gry_normalized), BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train_gry_normalized[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 0.5})
validation_accuracy = evaluate(X_valid_gry_normalized, y_valid)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, 'lenet')
print("Model saved")
###Output
done
layer 1 shape: (?, 28, 28, 6)
done
Training...
EPOCH 1 ...
Validation Accuracy = 0.680
EPOCH 2 ...
Validation Accuracy = 0.824
EPOCH 3 ...
Validation Accuracy = 0.870
EPOCH 4 ...
Validation Accuracy = 0.886
EPOCH 5 ...
Validation Accuracy = 0.911
EPOCH 6 ...
Validation Accuracy = 0.921
EPOCH 7 ...
Validation Accuracy = 0.925
EPOCH 8 ...
Validation Accuracy = 0.940
EPOCH 9 ...
Validation Accuracy = 0.935
EPOCH 10 ...
Validation Accuracy = 0.939
EPOCH 11 ...
Validation Accuracy = 0.941
EPOCH 12 ...
Validation Accuracy = 0.942
EPOCH 13 ...
Validation Accuracy = 0.946
EPOCH 14 ...
Validation Accuracy = 0.946
EPOCH 15 ...
Validation Accuracy = 0.950
EPOCH 16 ...
Validation Accuracy = 0.954
EPOCH 17 ...
Validation Accuracy = 0.958
EPOCH 18 ...
Validation Accuracy = 0.945
EPOCH 19 ...
Validation Accuracy = 0.953
EPOCH 20 ...
Validation Accuracy = 0.952
EPOCH 21 ...
Validation Accuracy = 0.962
EPOCH 22 ...
Validation Accuracy = 0.943
EPOCH 23 ...
Validation Accuracy = 0.955
EPOCH 24 ...
Validation Accuracy = 0.953
EPOCH 25 ...
Validation Accuracy = 0.954
EPOCH 26 ...
Validation Accuracy = 0.965
EPOCH 27 ...
Validation Accuracy = 0.959
EPOCH 28 ...
Validation Accuracy = 0.960
EPOCH 29 ...
Validation Accuracy = 0.954
EPOCH 30 ...
Validation Accuracy = 0.951
EPOCH 31 ...
Validation Accuracy = 0.958
EPOCH 32 ...
Validation Accuracy = 0.960
EPOCH 33 ...
Validation Accuracy = 0.953
EPOCH 34 ...
Validation Accuracy = 0.956
EPOCH 35 ...
Validation Accuracy = 0.965
EPOCH 36 ...
Validation Accuracy = 0.964
EPOCH 37 ...
Validation Accuracy = 0.963
EPOCH 38 ...
Validation Accuracy = 0.963
EPOCH 39 ...
Validation Accuracy = 0.962
EPOCH 40 ...
Validation Accuracy = 0.959
EPOCH 41 ...
Validation Accuracy = 0.965
EPOCH 42 ...
Validation Accuracy = 0.967
EPOCH 43 ...
Validation Accuracy = 0.955
EPOCH 44 ...
Validation Accuracy = 0.969
EPOCH 45 ...
Validation Accuracy = 0.972
EPOCH 46 ...
Validation Accuracy = 0.965
EPOCH 47 ...
Validation Accuracy = 0.966
EPOCH 48 ...
Validation Accuracy = 0.965
EPOCH 49 ...
Validation Accuracy = 0.964
EPOCH 50 ...
Validation Accuracy = 0.969
EPOCH 51 ...
Validation Accuracy = 0.963
EPOCH 52 ...
Validation Accuracy = 0.967
EPOCH 53 ...
Validation Accuracy = 0.967
EPOCH 54 ...
Validation Accuracy = 0.966
EPOCH 55 ...
Validation Accuracy = 0.968
EPOCH 56 ...
Validation Accuracy = 0.969
EPOCH 57 ...
Validation Accuracy = 0.964
EPOCH 58 ...
Validation Accuracy = 0.965
EPOCH 59 ...
Validation Accuracy = 0.965
EPOCH 60 ...
Validation Accuracy = 0.964
Model saved
###Markdown
Log

- 10/06/17 - 96.4% - model: LeNet - batch size: 150, epochs: 60, rate: 0.0009, mu: 0, sigma: 0.1
- 10/06/17 - 91.4% - model: LeNet - batch size: 150, epochs: 60, rate: 0.0001, mu: 0, sigma: 0.1
- 10/06/17 - 92.7% - model: LeNet - batch size: 160, epochs: 100, rate: 0.0001, mu: 0, sigma: 0.1
- 10/06/17 - 96.7% - model: LeNet - batch size: 160, epochs: 100, rate: 0.0005, mu: 0, sigma: 0.1
- 10/06/17 - 95.6% - model: LeNet - batch size: 100, epochs: 50, rate: 0.0005, mu: 0, sigma: 0.1
- 10/06/17 - 96.6% - model: LeNet - batch size: 100, epochs: 50, rate: 0.0007, mu: 0, sigma: 0.1
- 10/06/17 - 95.7% - model: LeNet - batch size: 100, epochs: 50, rate: 0.001, mu: 0, sigma: 0.1
- 10/06/17 - 95.5% - model: LeNet - batch size: 160, epochs: 50, rate: 0.0007, mu: 0, sigma: 0.1
- 10/06/17 - 95.0% - model: LeNet - batch size: 120, epochs: 50, rate: 0.0007, mu: 0, sigma: 0.1
- 10/06/17 - 97.1% - model: LeNet - batch size: 120, epochs: 100, rate: 0.0007, mu: 0, sigma: 0.1
- 10/11/17 - 97.3% - model: LeNet - batch size: 120, epochs: 100, rate: 0.0007, mu: 0, sigma: 0.1
###Code
# Evaluate the accuracy of the model on the test dataset
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
saver2 = tf.train.import_meta_graph('./lenet.meta')
saver2.restore(sess, "./lenet")
test_accuracy = evaluate(X_test_gry_normalized, y_test)
print("Test Set Accuracy = {:.3f}".format(test_accuracy))
###Output
Test Set Accuracy = 0.950
###Markdown
--- Step 3: Test a Model on New ImagesTo give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name. Load and Output the Images
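A minimal sketch of that mapping (assuming `signnames.csv` uses the standard `ClassId,SignName` header):

```python
import pandas as pd

# Sketch: build a class-id -> sign-name lookup from signnames.csv.
sign_names = pd.read_csv('signnames.csv').set_index('ClassId')['SignName'].to_dict()
print(sign_names.get(17))  # e.g. 'No entry' for class id 17 in the standard file
```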
###Code
### Load the images and plot them here.
#reading in an image
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
import numpy as np
import cv2
import glob
import matplotlib.image as mpimg
from scipy.misc import imread, imsave, imresize
fig, axs = plt.subplots(1,5, figsize=(20, 20))
fig.subplots_adjust(hspace = .2, wspace=.001)
axs = axs.ravel()
my_images = []
for i, img in enumerate(glob.glob('./my_traffic_signs/*.jpg')):
image = cv2.imread(img)
axs[i].axis('off')
axs[i].imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
image = imresize(image, (32,32))
my_images.append(image)
my_images = np.asarray(my_images)
my_images_gry = np.sum(my_images/3, axis=3, keepdims=True)
my_images_normalized = (my_images_gry - 128)/128
print(my_images_normalized.shape)
###Output
(5, 32, 32, 1)
###Markdown
Predict the Sign Type for Each Image and analyze Performance
###Code
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
my_labels = [4, 38, 16, 17, 25]
# my_labels = [0, 1, 12, 13, 14, 17, 18, 3, 36, 40]
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
saver3 = tf.train.import_meta_graph('./lenet.meta')
saver3.restore(sess, "./lenet")
OUT = sess.run(tf.argmax(logits, 1), feed_dict={x: my_images_normalized, y: my_labels, keep_prob: 1.0})
print("", OUT , "<-predictions")
print("", my_labels, "<-actual")
### Calculate the accuracy for these 5 new images.
my_accuracy = evaluate(my_images_normalized, my_labels)
print("Test Set Accuracy = {:.3f}".format(my_accuracy))
###Output
[20 12 25 17 35] <-predictions
[4, 38, 16, 17, 25] <-actual
Test Set Accuracy = 0.200
###Markdown
Output Top 5 Softmax Probabilities For Each Image Found on the Web For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.htmltop_k) could prove helpful here. The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.`tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. `tf.nn.top_k` is used to choose the three classes with the highest probability:``` (5, 6) arraya = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202], [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401, 0.15899337], [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 , 0.23892179], [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 , 0.16505091], [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137, 0.09155967]])```Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:```TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202], [ 0.28086119, 0.27569815, 0.18063401], [ 0.26076848, 0.23892179, 0.23664738], [ 0.29198961, 0.26234032, 0.16505091], [ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5], [0, 1, 4], [0, 5, 1], [1, 3, 5], [1, 4, 3]], dtype=int32))```Looking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`, you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.
###Code
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
softmax_logits = tf.nn.softmax(logits)
top_k = tf.nn.top_k(softmax_logits, k=3)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
saver = tf.train.import_meta_graph('./lenet.meta')
saver.restore(sess, "./lenet")
my_softmax_logits = sess.run(softmax_logits, feed_dict={x: my_images_normalized, keep_prob: 1.0})
my_top_k = sess.run(top_k, feed_dict={x: my_images_normalized, keep_prob: 1.0})
fig, axs = plt.subplots(len(my_images),4, figsize=(12, 14))
fig.subplots_adjust(hspace = .4, wspace=.2)
axs = axs.ravel()
for i, image in enumerate(my_images):
axs[4*i].axis('off')
axs[4*i].imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
axs[4*i].set_title('input')
guess1 = my_top_k[1][i][0]
index1 = np.argwhere(y_valid == guess1)[0]
axs[4*i+1].axis('off')
axs[4*i+1].imshow(X_valid_gry_normalized[index1].squeeze(), cmap='gray')
axs[4*i+1].set_title('top guess: {} ({:.0f}%)'.format(guess1, 100*my_top_k[0][i][0]))
guess2 = my_top_k[1][i][1]
index2 = np.argwhere(y_valid == guess2)[0]
axs[4*i+2].axis('off')
axs[4*i+2].imshow(X_valid_gry_normalized[index2].squeeze(), cmap='gray')
axs[4*i+2].set_title('2nd guess: {} ({:.0f}%)'.format(guess2, 100*my_top_k[0][i][1]))
guess3 = my_top_k[1][i][2]
index3 = np.argwhere(y_valid == guess3)[0]
axs[4*i+3].axis('off')
axs[4*i+3].imshow(X_valid_gry_normalized[index3].squeeze(), cmap='gray')
axs[4*i+3].set_title('3rd guess: {} ({:.0f}%)'.format(guess3, 100*my_top_k[0][i][2]))
###Output
_____no_output_____
###Markdown
Project Writeup Writeup written in the README file. Once you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file. > **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. --- Step 4 (Optional): Visualize the Neural Network's State with Test Images This Section is not required to complete but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol. Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimulus image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for its second convolutional layer you could enter conv2 as the tf_activation variable. For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image. Your output should look something like this (above)
###Code
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, ect if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
if activation_min != -1 & activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
###Output
_____no_output_____
###Markdown
Self-Driving Car Engineer Nanodegree Deep Learning Project: Build a Traffic Sign Recognition Classifier In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary. > **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/481/view) for this project. The [rubric](https://review.udacity.com/!/rubrics/481/view) contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file. >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited typically by double-clicking the cell to enter edit mode. --- Step 0: Load The Data
###Code
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
training_file = 'dataset/train.p'
validation_file='dataset/valid.p'
testing_file = 'dataset/test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
###Output
_____no_output_____
###Markdown
--- Step 1: Dataset Summary & ExplorationThe pickled data is a dictionary with 4 key/value pairs:- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.- `'sizes'` is a list containing tuples, (width, height) representing the original width and height the image.- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results. Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
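A minimal sketch for inspecting those dictionaries (assuming the `train`, `valid` and `test` objects loaded in the previous cell):

```python
# Sketch: list the available keys and the shapes of the arrays actually used.
for name, d in [('train', train), ('valid', valid), ('test', test)]:
    print(name, sorted(d.keys()))
    print('   features:', d['features'].shape, ' labels:', d['labels'].shape)
```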
###Code
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
import pandas as pd
import numpy as np
# TODO: Number of training examples
n_train = X_train.shape[0]
# TODO: Number of validation examples
n_validation = X_valid.shape[0]
# TODO: Number of testing examples.
n_test = X_test.shape[0]
# TODO: What's the shape of an traffic sign image?
image_shape = X_train.shape[1:]
# TODO: How many unique classes/labels there are in the dataset.
df = pd.read_csv('signnames.csv')
n_classes =len( df.index)
assert(len(X_train) == len(y_train))
assert(len(X_valid) == len(y_valid))
assert(len(X_test) == len(y_test))
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
###Output
Number of training examples = 34799
Number of testing examples = 12630
Image data shape = (32, 32, 3)
Number of classes = 43
###Markdown
Include an exploratory visualization of the dataset Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc. The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.**NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
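As a small sketch of that comparison (using only numpy and matplotlib plus the label arrays and `n_classes` from the previous cells), the per-class frequencies of the three splits can be overlaid:

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch: compare class frequencies (as fractions) across the three splits.
for name, labels in [('train', y_train), ('valid', y_valid), ('test', y_test)]:
    counts = np.bincount(labels, minlength=n_classes)
    plt.plot(counts / counts.sum(), label=name)
plt.xlabel('class id')
plt.ylabel('fraction of split')
plt.legend()
plt.show()
```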
###Code
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
from collections import Counter
# Visualizations will be shown in the notebook.
%matplotlib inline
counter = Counter(y_train)
key,data = zip(*sorted(counter.most_common()))
plt.figure(figsize=(12, 4))
plt.bar(key, data, alpha=0.6)
plt.title("training dataset distribution")
plt.xlabel("Classes")
plt.ylabel("Image Number")
plt.show()
plt.figure(figsize=(12, 16.5))
for i in range(0, n_classes):
plt.subplot(11, 4, i+1)
filtered = X_train[y_train == i] # all the training data according to class values.
plt.imshow(filtered[0,: ]) #plot first image of the class.
plt.title(df.values[i][1])
plt.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
---- Step 2: Design and Test a Model ArchitectureDesign and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play! With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. There are various aspects to consider when thinking about this problem:- Neural network architecture (is the network over or underfitting?)- Play around preprocessing techniques (normalization, rgb to grayscale, etc)- Number of examples per label (some have more than others).- Generate fake data.Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these. Pre-process the Data Set (normalization, grayscale, etc.) Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project. Other pre-processing steps are optional. You can try different techniques to see if it improves performance. Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
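As a quick, hedged sketch of that normalization (run here on a small slice of `X_train` only, so it does not interfere with the preprocessing cells below):

```python
import numpy as np

# Sketch: (pixel - 128) / 128 normalization on a small subset of the images.
sample = X_train[:1000].astype(np.float32)
sample_norm = (sample - 128.0) / 128.0
print("range: [{:.2f}, {:.2f}], mean: {:.3f}".format(
    sample_norm.min(), sample_norm.max(), sample_norm.mean()))
```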
###Code
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
from keras.preprocessing.image import ImageDataGenerator
from numpy import expand_dims
from keras.preprocessing.image import img_to_array
###Output
Using TensorFlow backend.
###Markdown
I use the Keras image data generation utilities to create new images from the existing ones. According to the documentation, the following transformation parameters are available; random generation will be used here (a small sketch of applying one explicit transform follows below).
```
'theta': Float. Rotation angle in degrees.
'tx': Float. Shift in the x direction.
'ty': Float. Shift in the y direction.
'shear': Float. Shear angle in degrees.
'zx': Float. Zoom in the x direction.
'zy': Float. Zoom in the y direction.
'flip_horizontal': Boolean. Horizontal flip.
'flip_vertical': Boolean. Vertical flip.
'channel_shift_intensity': Float. Channel shift intensity.
'brightness': Float. Brightness shift intensity.
```
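As a hedged illustration (assuming a Keras version in which `ImageDataGenerator.apply_transform` accepts a parameter dictionary like the one quoted above), a single explicit transform can be applied to one training image:

```python
from keras.preprocessing.image import ImageDataGenerator

# Sketch only: apply one explicit transform to a single image.
gen = ImageDataGenerator()
params = {'theta': 15.0, 'tx': 2.0, 'ty': -2.0, 'zx': 0.9, 'zy': 0.9}
augmented = gen.apply_transform(X_train[0].astype('float32'), params)
print(augmented.shape)  # same shape as the input image, e.g. (32, 32, 3)
```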
###Code
from sklearn.utils import shuffle
X_train , y_train = shuffle(X_train, y_train )
X_test , y_test = shuffle(X_test, y_test )
X_valid , y_valid = shuffle(X_valid, y_valid )
def preprocess(data):
# Normalization ([0, 255] => [-1, 1))
return (data - 128.0) / 128.0
def restore_image(data):
return data * 128.0 + 128.0
datagen = ImageDataGenerator(
featurewise_center=True,
featurewise_std_normalization=True,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True)
# compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(X_train)
###Output
_____no_output_____
###Markdown
Generating images to increase the training set size.
###Code
plt.figure(figsize=(12, 16.5))
for i in range(0, n_classes):
plt.subplot(11, 4, i+1)
x = X_train[y_train == i][0,:] # all the training data according to class values.
plt.imshow(datagen.random_transform(x, seed=1)) #plot first image of the class.
plt.title(df.values[i][1])
plt.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
For always obtaining the same result we need to save the generated dataset to a pickle file. `X_train = preprocess(X_train)`; `X_test = preprocess(X_test)`; `X_valid = preprocess(X_valid)`
###Code
X_train_ex = []
y_train_ex = []
def create_data(X_train_ex, y_train_ex, n=3):
for i in range(0, X_train.shape[0]):
for j in range(0,n): # Generate n new random transfer images.
X_train_ex.append(datagen.random_transform(X_train[i,:], seed=1))
y_train_ex.append(y_train[i])
try:
with open("train_augmented.pkl", "rb") as f:
X_train_ex, y_train_ex = pickle.load(f)
X_train_ex = np.array(X_train_ex)
y_train_ex = np.array(y_train_ex)
except OSError as err:
print("train_augment")
create_data(X_train_ex,y_train_ex)
X_train_ex = np.concatenate( (X_train_ex, X_train))
y_train_ex = np.concatenate( (y_train_ex, y_train))
with open("train_augmented.pkl", 'wb') as f:
pickle.dump([X_train_ex,y_train_ex], file=f)
X_train_ex = preprocess(X_train_ex)
X_test = preprocess(X_test)
X_valid = preprocess(X_valid)
###Output
_____no_output_____
###Markdown
Model Architecture

**Architecture**

- **Layer 1:** Convolutional. The output shape should be 28x28x6.
- **Activation.** Your choice of activation function.
- **Pooling.** The output shape should be 14x14x6.
- **Layer 2:** Convolutional. The output shape should be 10x10x16.
- **Activation.** Your choice of activation function.
- **Pooling.** The output shape should be 5x5x16.
- **Flatten.** Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do this is by using tf.contrib.layers.flatten, which is already imported for you.
- **Layer 3:** Fully Connected. This should have 120 outputs.
- **Activation.** Your choice of activation function.
- **Layer 4:** Fully Connected. This should have 84 outputs.
- **Activation.** Your choice of activation function.
- **Layer 5:** Fully Connected (Logits). This should have 10 outputs.
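The 'VALID' sizes above (32 -> 28 -> 14 -> 10 -> 5) follow from the usual convolution arithmetic; the small helper below is an illustrative sketch added here, not part of the project template:

```python
def conv_output_size(input_size, filter_size, stride=1):
    # 'VALID' padding: output = (input - filter) // stride + 1
    return (input_size - filter_size) // stride + 1

print(conv_output_size(32, 5))      # 28 -> Layer 1, 5x5 filter
print(conv_output_size(28, 2, 2))   # 14 -> 2x2 max pooling, stride 2
print(conv_output_size(14, 5))      # 10 -> Layer 2, 5x5 filter
print(conv_output_size(10, 2, 2))   # 5  -> 2x2 max pooling, stride 2
```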
###Code
from tensorflow.contrib.layers import flatten
from tensorflow.contrib import layers
import tensorflow as tf
#global variables/.
EPOCHS = 30
BATCH_SIZE = 128
def TrafficSignNet(x, keep_prob, mu = 0, sigma = 0.1 ):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
# SOLUTION: Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x12.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 3, 12), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(12))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
l1 = conv1
# SOLUTION: Activation.
conv1 = tf.nn.relu(conv1)
# SOLUTION: Pooling. Input = 28x28x12. Output = 14x14x12
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 12, 32), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(32))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
l2 = conv2
# SOLUTION: Activation.
conv2 = tf.nn.relu(conv2)
# SOLUTION: Pooling. Input = 10x10x32. Output = 5x5x32.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# SOLUTION: Layer 3: Fully Connected. Input = 800. Output = 240.
fc1_W = tf.Variable(tf.truncated_normal(shape=(800, 240), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(240))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# SOLUTION: Activation.
fc1 = tf.nn.relu(fc1)
fc1 = tf.nn.dropout(fc1, keep_prob=keep_prob)
# dropout is added to the network to see the performance.
# SOLUTION: Layer 4: Fully Connected. Input = 240. Output = 120.
fc2_W = tf.Variable(tf.truncated_normal(shape=(240, 120), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(120))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# SOLUTION: Activation.
fc2 = tf.nn.relu(fc2)
fc2 = tf.nn.dropout(fc2, keep_prob=keep_prob)
# SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 10.
fc3_W = tf.Variable(tf.truncated_normal(shape=(120, 43), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(43))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits, l1, l2
x = tf.placeholder(tf.float32, (None, 32, 32, 3))
y = tf.placeholder(tf.int32, (None))
y_onehot = tf.one_hot(y, n_classes)
keep_prob = tf.placeholder(tf.float32)
###Output
_____no_output_____
###Markdown
Train, Validate and Test the Model A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
###Code
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
logits, conv1, conv2 = TrafficSignNet(x, keep_prob )
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(y_onehot, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
softmax_op = tf.nn.softmax(logits=logits)
pred_count = 5
top5_op = tf.nn.top_k(softmax_op, k=pred_count)
# evaluations
learning_rate = 0.001
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_onehot, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate)
training_operation = optimizer.minimize(loss_operation)
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+ BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
###Output
_____no_output_____
###Markdown
Train the Model. Training is done on both the extended and the original dataset. After each epoch the accuracy on the validation set is measured. The model is saved after training for future use. The extended version of the dataset is used to show the effect of generating augmented images; augmentation can expand the capability of the network in terms of scale, rotation, etc.
###Code
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train_ex)
print("Training...")
print()
for i in range(EPOCHS):
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train_ex[offset:end], y_train_ex[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 0.5})
validation_accuracy = evaluate(X_valid, y_valid)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {}".format(validation_accuracy))
print()
saver.save(sess, './trafficSignNet_ex')
print("Model saved")
###Output
_____no_output_____
###Markdown
```
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    num_examples = len(X_train)
    print("Training...")
    print()
    for i in range(EPOCHS):
        for offset in range(0, num_examples, BATCH_SIZE):
            end = offset + BATCH_SIZE
            batch_x, batch_y = X_train[offset:end], y_train[offset:end]
            sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 0.5})
        validation_accuracy = evaluate(X_valid, y_valid)
        print("EPOCH {} ...".format(i+1))
        print("Validation Accuracy = {}".format(validation_accuracy))
        print()
    saver.save(sess, './trafficSignNet')
    print("Model saved")
```

Evaluate the Model

Once you are completely satisfied with your model, evaluate the performance of the model on the test set. Be sure to only do this once! If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data. You do not need to modify this section.

```
with tf.Session() as sess:
    saver.restore(sess, './trafficSignNet')
    test_accuracy = evaluate(X_test, y_test)
    print("Test Accuracy = {:.3f}".format(test_accuracy))
```
###Code
with tf.Session() as sess:
saver.restore(sess, './trafficSignNet_ex')
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
###Output
INFO:tensorflow:Restoring parameters from ./trafficSignNet_ex
###Markdown
INFO:tensorflow:Restoring parameters from ./trafficSignNet_ex
Test Accuracy = 0.953

--- Step 3: Test a Model on New Images To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type. You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name. Load and Output the Images
###Code
import os
import matplotlib.pyplot as plt
import cv2
new_img_dir = 'sampleImages/'
image_files = sorted(os.listdir(new_img_dir))
new_img_count = len(image_files)
new_images = []
X_new, y_new = [], []
for image_name in image_files:
# Read an image file
img = plt.imread(new_img_dir + image_name)
print(image_name)
new_images.append(img)
# Resize the image file
img_resized = cv2.resize(img, dsize=(32, 32), interpolation=cv2.INTER_LINEAR)
print(img_resized.shape)
X_new.append(img_resized)
# Determine the traffic sign class
#img_class = int(image_name.split('.')[0])
#y_new.append(img_class)
# Preprocess images
y_new = np.array(y_new)
X_new = np.array(X_new)
plt.figure(figsize=(12, 16.5))
for i in range(0, X_new.shape[0]):
plt.subplot(11, 4, i+1)
plt.imshow(X_new[i]) #plot first image of the class.
    #plt.title(df.values[i][1])  # when I decide on the title I'll use it
plt.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
Predict the Sign Type for Each Image
###Code
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
X_new_proc = preprocess(X_new)
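# Hedged sketch (added): run the restored model on the preprocessed images.
# Relies on logits, x, keep_prob, saver and df defined earlier in this notebook.
with tf.Session() as sess:
    saver.restore(sess, './trafficSignNet_ex')
    new_preds = sess.run(tf.argmax(logits, 1), feed_dict={x: X_new_proc, keep_prob: 1.0})
print("Predicted class ids:", new_preds)
print("Predicted sign names:", [df.values[c][1] for c in new_preds])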
###Output
_____no_output_____
###Markdown
Analyze Performance
###Code
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
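# Hedged sketch (added): once the true class ids of the downloaded images are
# known, the accuracy follows directly. y_new is left empty above, so the lines
# below stay commented out; the ids shown are hypothetical placeholders.
# y_new = np.array([17, 14, 13, 25, 38])
# with tf.Session() as sess:
#     saver.restore(sess, './trafficSignNet_ex')
#     preds = sess.run(tf.argmax(logits, 1), feed_dict={x: X_new_proc, keep_prob: 1.0})
#     print("Accuracy on new images = {:.3f}".format(np.mean(preds == y_new)))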
###Output
_____no_output_____
###Markdown
Output Top 5 Softmax Probabilities For Each Image Found on the Web For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.htmltop_k) could prove helpful here. The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.`tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. `tf.nn.top_k` is used to choose the three classes with the highest probability:``` (5, 6) arraya = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202], [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401, 0.15899337], [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 , 0.23892179], [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 , 0.16505091], [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137, 0.09155967]])```Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:```TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202], [ 0.28086119, 0.27569815, 0.18063401], [ 0.26076848, 0.23892179, 0.23664738], [ 0.29198961, 0.26234032, 0.16505091], [ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5], [0, 1, 4], [0, 5, 1], [1, 3, 5], [1, 4, 3]], dtype=int32))```Looking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`, you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.
###Code
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
with tf.Session() as sess:
saver.restore(sess, './trafficSignNet_ex')
    top5_result = sess.run(top5_op, feed_dict={x: X_new_proc, keep_prob: 1.0})  # feed the preprocessed images, matching the training pipeline
plt.figure(figsize=(12, 16.5))
for i in range(0 , top5_result[1].shape[0]):
plt.subplot(11, 4, i+1)
result = "\n".join(df.values[ top5_result[1][i]][:,1])
plt.title(result)
plt.imshow(X_new[i])
plt.axis('off')
plt.show()
#test_accuracy = evaluate(X_test, y_test)
#print("Test Accuracy = {:.3f}".format(test_accuracy))
top5_result
top5_result[1][0]
###Output
_____no_output_____
###Markdown
Project Writeup Once you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file. > **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. --- Step 4 (Optional): Visualize the Neural Network's State with Test Images This Section is not required to complete but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol. Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimulus image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for its second convolutional layer you could enter conv2 as the tf_activation variable. For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image. Your output should look something like this (above)
###Code
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, ect if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
if activation_min != -1 & activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
X_new[0:1].shape
#%% Visualize Convolution Layer Feature Maps
train_data_file = './trafficSignNet_ex'  # checkpoint restored for the second visualization below
plt.imshow(X_new[0].squeeze(), cmap="gray")
#plt.savefig('result/visualize_1.png')
plt.show()
with tf.Session() as sess:
saver.restore(sess, './trafficSignNet_ex')
    outputFeatureMap(X_new[0:1], conv1)  # sess is picked up from the enclosing with-block
    outputFeatureMap(X_new[0:1], conv2)
plt.imshow(X_new[1].squeeze(), cmap="gray")
#plt.savefig('result/visualize_2.png')
plt.show()
with tf.Session() as sess:
saver.restore(sess, train_data_file)
    outputFeatureMap(X_new[1:2], conv1)  # sess is picked up from the enclosing with-block
    outputFeatureMap(X_new[1:2], conv2)
###Output
_____no_output_____
###Markdown
Self-Driving Car Engineer Nanodegree Deep Learning Project: Build a Traffic Sign Recognition Classifier In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary. > **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/481/view) for this project. The [rubric](https://review.udacity.com/!/rubrics/481/view) contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file. >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited typically by double-clicking the cell to enter edit mode. --- Step 0: Load The Data
###Code
# Load pickled data
import pickle
import numpy as np
# TODO: Fill this in based on where you saved the training and testing data
training_file = "traffic-signs-data/train.p"
validation_file= "traffic-signs-data/valid.p"
testing_file = "traffic-signs-data/test.p"
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
###Output
_____no_output_____
###Markdown
--- Step 1: Dataset Summary & ExplorationThe pickled data is a dictionary with 4 key/value pairs:- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.- `'sizes'` is a list containing tuples, (width, height) representing the original width and height the image.- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results. Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
###Code
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
# TODO: Number of training examples
n_train = len(X_train)
# TODO: Number of validation examples
n_validation = len(X_valid)
# TODO: Number of testing examples.
n_test = len(X_test)
# TODO: What's the shape of an traffic sign image?
image_shape = X_train[0].shape
# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(np.unique(y_train))
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
###Output
Number of training examples = 34799
Number of testing examples = 12630
Image data shape = (32, 32, 3)
Number of classes = 43
###Markdown
Include an exploratory visualization of the dataset Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc. The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.**NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
###Code
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
import random
import numpy as np
# Visualizations will be shown in the notebook.
%matplotlib inline
index = random.randint(0,len(X_train))
image = X_train[index].squeeze()
plt.imshow(image)
def im_plot1(data_set,n_rows,n_cols):
num = 1
array = np.random.randint(0,44,5)
sel_set = data_set[array[4]]
plt.subplot(n_rows,n_cols,num)
plt.imshow(sel_set)
# print(y_train[array[0]])
plt.title("{}".format(y_train[array[4]]))
print(array)
return None
im_plot1(X_train,1,1)
def im_plot(data_set,data_set2,n_rows,n_cols):
im_num = 1
plt.figure(figsize=(12, 8))
num_class = np.random.randint(0,44,n_rows)
for i in range(len(num_class)):
# print(data_set[data_set2 == 1][0])
# num_fig = random.sample(data_set[data_set2 == i],5)
# print(num_fig)
# print(num_class[i])
for j in range(0,n_cols):
plt.subplot(n_rows,n_cols,im_num)
plt.imshow(data_set[data_set2 == num_class[i]][j])
plt.title("{}".format(num_class[i]))
im_num += 1
return None
im_plot(X_train,y_train,4,6)
def data_distribute(data_set2):
num_class = np.bincount(data_set2)
plt.figure(figsize = (12,8))
print(num_class)
plt.bar(np.arange(0,43),num_class,0.8,color = "red")
plt.title("Distribution of classes")
plt.xlabel("class of numbers")
plt.ylabel("amount of each nunbers")
# plt.show()
return None
data_distribute(y_train)
###Output
[ 180 1980 2010 1260 1770 1650 360 1290 1260 1320 1800 1170 1890 1920 690
540 360 990 1080 180 300 270 330 450 240 1350 540 210 480 240
390 690 210 599 360 1080 330 180 1860 270 300 210 210]
###Markdown
---- Step 2: Design and Test a Model ArchitectureDesign and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play! With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. There are various aspects to consider when thinking about this problem:- Neural network architecture (is the network over or underfitting?)- Play around preprocessing techniques (normalization, rgb to grayscale, etc)- Number of examples per label (some have more than others).- Generate fake data.Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these. Pre-process the Data Set (normalization, grayscale, etc.) Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project. Other pre-processing steps are optional. You can try different techniques to see if it improves performance. Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
###Code
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
from sklearn.utils import shuffle
import cv2
X_train,y_train = shuffle(X_train,y_train)
# def im_process(image):
# R = (image[:,:,0] - 128)/128
# G = (image[:,:,1] - 128) / 128
# B = (image[:,:,2] - 128) / 128
# combined_image = np.dstack((R,G,B))
# # Gray_image = cv2.cvtColor(image,cv.COLOR_RGB2GRAY)
# gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# return combined_image
def im_process(image_dataset):
im = np.sum(image_dataset/3,axis = 3,keepdims = True)
im = (im - 128) / 128
return im
def im_process2(image_dataset):
im = np.sum(image_dataset/3,axis = 3,keepdims = True)
im = (im - 128) / 128
return im
X_train_process = im_process(X_train)
plt.imshow(X_train_process[0].squeeze())
print(X_train[0].shape)
print(X_train.shape)
print(X_train_process.shape)
# im_plot(X_train_process,y_train,1,6)
def im_plot(data_set,data_set2,n_rows,n_cols):
im_num = 1
im_num2 = 1
plt.figure(figsize=(12, 8))
num_class = np.random.randint(0,44,n_rows)
for i in range(len(num_class)):
for j in range(0,n_cols):
plt.subplot(n_rows,n_cols,im_num)
plt.imshow(data_set[data_set2 == num_class[i]][j])
plt.title("{}".format(num_class[i]))
im_num += 1
# n_rows += 1
for m in range(0,n_cols):
plt.subplot(n_rows,n_cols,im_num2)
plt.imshow(im_process2(data_set[data_set2 == num_class[i]])[0].squeeze())
# print(data_set[data_set2 == num_class[i]].shape)
# print("*********")
# print(im_process2(data_set[data_set2 == num_class[i]]).shape)
im_num2 += 1
# n_rows -= 1
return None
im_plot(X_train,y_train,4,6)
def im_plot(data_set,data_set2,n_rows,n_cols):
im_num = 0
im_num2 = 1
plt.figure(figsize=(12, 8))
num_class = np.random.randint(0,44,n_rows)
fig,axs = plt.subplots(8,6,figsize = (12,8))
axs = axs.ravel()
for i in range(len(num_class)):
for j in range(0,n_cols):
# plt.subplot(n_rows,n_cols,im_num)
axs[im_num].imshow(data_set[data_set2 == num_class[i]][j])
# axs[im_num].title("{}".format(num_class[i]))
im_num += 1
# n_rows += 1
for m in range(0,n_cols):
# plt.subplot(n_rows,n_cols,im_num2)
axs[im_num].imshow(im_process2(data_set[data_set2 == num_class[i]])[m].squeeze())
# print(data_set[data_set2 == num_class[i]].shape)
# print("*********")
# print(im_process2(data_set[data_set2 == num_class[i]]).shape)
im_num += 1
# n_rows -= 1
return None
im_plot(X_train,y_train,4,6)
###Output
_____no_output_____
###Markdown
Model Architecture
###Code
### Define your architecture here.
### Feel free to use as many code cells as needed.
import tensorflow as tf
from tensorflow.contrib.layers import flatten
EPOCHS = 10
BATCH_SIZE = 128 #
dropout = 0.5
def LeNet(x):
mu = 0
sigma = 0.01
# Layer1:Input:32 * 32 * 1,Output:28 * 28 * 6
Layer1_w = tf.Variable(tf.random_normal(shape = [5,5,1,6],mean = mu, stddev = sigma))
Layer1_b = tf.Variable(tf.random_normal([6]))
stride_1 = [1,1,1,1]
conv1 = tf.nn.bias_add(tf.nn.conv2d(x,Layer1_w,stride_1,padding = "VALID"),Layer1_b)
#Activation
conv1 = tf.nn.relu(conv1)
#Pooling:Input:28*28*6,Output:14*14*6
conv1 = tf.nn.max_pool(conv1,ksize = [1,2,2,1],strides = [1,2,2,1],padding = "VALID")
#Layer2:Input:14*14*6,Output:10*10*16
Layer2_w = tf.Variable(tf.random_normal(shape =[5,5,6,16],mean = mu,stddev = sigma))
Layer2_b = tf.Variable(tf.random_normal([16]))
stride_2 = [1,1,1,1]
conv2 = tf.nn.bias_add(tf.nn.conv2d(conv1,Layer2_w,stride_2,padding = "VALID"),Layer2_b)
#Activation
conv2 = tf.nn.relu(conv2)
#Pooling:Input:10*10*16,Output:5*5*16
conv2 = tf.nn.max_pool(conv2,ksize = [1,2,2,1],strides =[1,2,2,1] ,padding = "VALID")
#Flatten
fc_0 = flatten(conv2)
#Layer3:Input:400,Output:120
Layer3_w = tf.Variable(tf.random_normal(shape =(400,120),mean = mu,stddev = sigma))
    Layer3_b = tf.Variable(tf.zeros(120))
fc_1 = tf.nn.bias_add(tf.matmul(fc_0,Layer3_w),Layer3_b)
#Activation
fc1 = tf.nn.relu(fc_1)
fc1 = tf.nn.dropout(fc1,dropout)
#Layer4:Input:120,Output:43
Layer4_w = tf.Variable(tf.random_normal(shape =(120,43),mean = mu, stddev = sigma))
Layer4_b = tf.Variable(tf.zeros(43))
fc_2 = tf.nn.bias_add(tf.matmul(fc1,Layer4_w),Layer4_b)
return fc_2
###Output
_____no_output_____
###Markdown
Train, Validate and Test the Model

A validation set can be used to assess how well the model is performing. A low accuracy on both the training and validation sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
###Code
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
tf.reset_default_graph()  # not entirely sure whether this block of code belongs here
x = tf.placeholder(tf.float32,(None,32,32,1))
y = tf.placeholder(tf.int32,(None))
one_hot_y = tf.one_hot(y,43)
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels = one_hot_y,logits = logits)
loss = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss)
predict = tf.equal(tf.argmax(logits,1),tf.argmax(one_hot_y,1))
accuracy_operation = tf.reduce_mean(tf.cast(predict,tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
    num_examples = len(X_data)
    total_accuracy = 0
    sess = tf.get_default_session()
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        batch_accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
        # weight each batch's accuracy by its size so the overall average is exact
        total_accuracy += (batch_accuracy * len(batch_x))
    return total_accuracy / num_examples
X_train = im_process2(X_train)
X_valid = im_process2(X_valid)
X_test = im_process2(X_test)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train,y_train = shuffle(X_train,y_train)
for j in range(0,num,BATCH_SIZE):
batch_x,batch_y = X_train[j:j+BATCH_SIZE],y_train[j:j+BATCH_SIZE]
sess.run(training_operation,feed_dict = {x:batch_x,y:batch_y})
validation_accuracy = evaluate(X_valid,y_valid)
print("EPOCH{}...".format(i+1))
print("Validation Accuaracy = {:.3f}".format(validation_accuracy))
    saver.save(sess, "./LeNet")  # save next to the notebook so tf.train.latest_checkpoint(".") can find it below
with tf.Session() as sess:
saver.restore(sess,tf.train.latest_checkpoint("."))
test_accuracy = evaluate(X_test,y_test)
print("Test Accuracy = {:,.3f}".format(test_accuracy))
###Output
_____no_output_____
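###Markdown
As noted above, underfitting and overfitting are diagnosed by comparing training and validation accuracy. A minimal sketch of that comparison (assumption: the checkpoint saved above can be found by `tf.train.latest_checkpoint(".")`):
###Code
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint("."))
    train_accuracy = evaluate(X_train, y_train)
    valid_accuracy = evaluate(X_valid, y_valid)
    print("Training Accuracy   = {:.3f}".format(train_accuracy))
    print("Validation Accuracy = {:.3f}".format(valid_accuracy))
###Output
_____no_output_____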
###Markdown
--- Step 3: Test a Model on New Images

To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type. You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name.

Load and Output the Images
###Code
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
import numpy as np
import cv2
import glob
import matplotlib.image as mpimg
import os
from scipy import misc
import glob
import matplotlib.image as mpimg
fig,axs = plt.subplots(2,4,figsize = (12,8))
axs = axs.ravel()
selected_images = []
for num,file in enumerate(glob.glob(os.path.join("Pictures","*.jpg"))):
#     image = cv2.imread(file)
#     axs[num].imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))  # if reading with cv2, BGR must be converted to RGB
image2 = mpimg.imread(file)
axs[num].imshow(image2)
selected_images.append(image2)
selected_image_set = np.asarray(selected_images)
selected_image_set_process = im_process2(selected_image_set)
selected_image_set.shape
selected_image_set_process.shape
###Output
_____no_output_____
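###Markdown
`signnames.csv`, mentioned above, maps each class id to a readable sign name. A small hypothetical helper for that lookup (assumption: the file sits in the working directory with `ClassId` and `SignName` columns):
###Code
import pandas as pd
sign_names = pd.read_csv('signnames.csv')   # assumed location and column names
id_to_name = dict(zip(sign_names['ClassId'], sign_names['SignName']))
id_to_name[12]  # look up the name of class 12
###Output
_____no_output_____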
###Markdown
Predict the Sign Type for Each Image
###Code
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
selected_images_labels = [12, 15, 17, 1, 23, 25, 28, 30]
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint("."))
    logits_values = sess.run(logits, feed_dict={x: selected_image_set_process})
print(logits_values)  # note: these are raw logits, not probabilities (softmax is applied later)
###Output
_____no_output_____
###Markdown
Analyze Performance
###Code
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
with tf.Session() as sess:
saver.restore(sess,tf.train.latest_checkpoint("."))
accuracy = evaluate(selected_image_set_process,selected_images_labels)
print(accuracy)
###Output
_____no_output_____
###Markdown
Output Top 5 Softmax Probabilities For Each Image Found on the Web

For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.html#top_k) could prove helpful here. The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.

`tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.

Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. `tf.nn.top_k` is used to choose the three classes with the highest probability:

```
# (5, 6) array
a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202],
              [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401, 0.15899337],
              [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 , 0.23892179],
              [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 , 0.16505091],
              [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137, 0.09155967]])
```

Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:

```
TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],
       [ 0.28086119, 0.27569815, 0.18063401],
       [ 0.26076848, 0.23892179, 0.23664738],
       [ 0.29198961, 0.26234032, 0.16505091],
       [ 0.34396535, 0.24206137, 0.16240774]]),
       indices=array([[3, 0, 5],
       [0, 1, 4],
       [0, 5, 1],
       [1, 3, 5],
       [1, 4, 3]], dtype=int32))
```

Looking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`, you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.
###Code
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
soft_value = tf.nn.softmax(logits)
top_k_value = tf.nn.top_k(soft_value,k = 5)
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint("."))
    top_k_result = sess.run(top_k_value, feed_dict={x: selected_image_set_process})
# one row per web image: the input image followed by training examples of its top 5 predicted classes
fig, axs = plt.subplots(len(selected_images), 6, figsize=(12, 16))
for num, im in enumerate(selected_images):
    axs[num][0].imshow(im)
    axs[num][0].set_title('input')
    for k in range(5):
        pred_class = top_k_result.indices[num][k]
        example_image = X_train[np.argwhere(y_train == pred_class)][0].squeeze()
        axs[num][k + 1].imshow(example_image, cmap='gray')
        axs[num][k + 1].set_title("{}".format(pred_class))
###Output
_____no_output_____
###Markdown
Project Writeup

Once you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file.

> **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.

--- Step 4 (Optional): Visualize the Neural Network's State with Test Images

This section is not required to complete but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.

Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for its second convolutional layer you could enter conv2 as the tf_activation variable.

For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.

Your output should look something like this (above)
###Code
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, ect if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
if activation_min != -1 & activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
###Output
_____no_output_____ |
content/post/drafts/viral-bayes/index.ipynb | ###Markdown
---
title: "Decision by sampling: Monte Carlo approaches to dealing with uncertainty"
summary: "This is how I'm dealing with my anxiety"
date: 2020-03-27
source: jupyter
---

Dealing with uncertainty is hard. Algebra is also hard. This post is a gentle introduction to a technique that lets you do the former, without having to do too much of the latter. To do this, we'll use two useful concepts from computer science and statistics: **Monte Carlo sampling**, and **Bayesian probability distributions**.

> Note:
> I wrote the first draft of this post early in the COVID lockdown,
> finished it 6 weeks in. There will be coronavirus-related examples,
> and the writing may be a little unhinged.
> Be warned.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats
from matplotlib import rcParams
import seaborn as sns
sns.set_style('whitegrid')
rcParams['figure.figsize'] = (6, 4)
rcParams['font.size'] = 18
rcParams['axes.spines.top'] = False
rcParams['axes.spines.right'] = False
from ipywidgets import interact, FloatSlider
def fs(value, min, max, step, description=None):
'''Shorthand for ipywidgets.FloatSlider'''
return FloatSlider(value=value, min=min, max=max, step=step,
description=description, continuous_update=False)
###Output
_____no_output_____
###Markdown
Monte Carlo Sampling

In *Monte Carlo* approaches, we use random simulations to answer questions that might otherwise require some difficult equations. Confusingly, they're also known in some fields as *numerical* approaches, and are contrasted with *analytic* approaches, where you just work out the correct equation. [Wikipedia tells us](https://en.wikipedia.org/wiki/Monte_Carlo_method#History) that, yes, Monte Carlo methods are named after the casino. I won't be talking in this post about [Markov Chain Monte Carlo](https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo), a particular Monte Carlo method that comes up a lot in Bayesian statistics.

Here's a simple Monte Carlo example. Let's say you want to know the area of a circle with a radius of $r$. We'll use a unit circle, $r=1$, in this example.
###Code
def circle_plot():
fig, ax = plt.subplots(figsize=(5, 5))
plt.hlines(1, -1, 1)
plt.vlines(1, -1, 1)
plt.scatter(0, 0, marker='+', color='k')
plt.xlim(-1, 1)
plt.ylim(-1, 1)
circle = plt.Circle((0, 0), 1, facecolor='None', edgecolor='r')
ax.add_artist(circle)
return fig, ax
circle_plot();
###Output
_____no_output_____
###Markdown
Analytically, you know that the answer is $\pi r^2$. What if we didn't know this equation? The Monte Carlo solution is as follows. We know that the area of the bounding square is $2r \times 2r = 4r^2$. We need to figure out what proportion of this square is taken up by the circle. To find out, we randomly select a large number of points in the square, and check if they're within $r$ of the center point $[0, 0]$.
###Code
n = 1000 # Number of points to simulate
x = np.random.uniform(low=-1, high=1, size=n)
y = np.random.uniform(low=-1, high=1, size=n)
# Distance from center (Pythagoras)
dist_from_origin = np.sqrt(x**2 + y**2)
# Check is distance is less than radius
is_in_circle = dist_from_origin < 1
# Plot results
circle_plot()
plt.scatter(x[is_in_circle], y[is_in_circle], color='b', s=2) # Points in circle
plt.scatter(x[~is_in_circle], y[~is_in_circle], color='k', s=2); # Points outside circle
m = is_in_circle.mean()
print('%.4f of points are in the circle' % m)
###Output
_____no_output_____
###Markdown
Since the area of the square is $4r^2$, and the circle takes up ~$0.78$ of the square, the area of the circle is roughly $0.78 \times 4r^2 = 3.14r^2$. We've discovered $\pi$.

Bayesian Probability Distributions

The term **Bayesian statistics** refers to a whole family of approaches to statistical inference. What is common to all of these approaches is that they take probabilities to be statements about *subjective beliefs*. This means a Bayesian doctor can say things like "*I'm 90% sure this patient has COVID-19*", while a non-Bayesian doctor could only say something like "*for every 100 patients with these symptoms, 90 will have it*". If this difference doesn't make much sense to you, fear not, because a) you're not alone, and b) it doesn't matter for this post.

The Bayesian approach gives us a useful way of thinking about uncertainty. If we're unsure about some number, we can replace it with a [*probability mass function*](https://en.wikipedia.org/wiki/Probability_mass_function) (if the number is discrete, like a count) or a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) (if the number is continuous, like a measurement) over possible values. For example, let's say you don't know how tall I am. I'm an adult Irish male, and we know that the heights of Irish men in general are normally distributed, with a mean of 70 and a standard deviation of 4 inches. This means you can use the distribution $Normal(70, 4)$, where $Normal(\mu, \sigma)$ stands for the Normal probability distribution function with mean $\mu$ and standard deviation $\sigma$.
###Code
heights_to_plot = np.arange(50, 90, .1)
pdf = stats.norm.pdf(heights_to_plot, 70, 4)
plt.plot(heights_to_plot, pdf)
plt.fill_between(heights_to_plot, pdf, 0, alpha=.1)
plt.ylim(0, .11)
plt.xlabel('Possible Heights (inches)')
plt.ylabel('Belief that Eoin\nmight be this tall')
###Output
_____no_output_____
###Markdown
$Normal(\mu, \sigma)$ is shorthand for a relatively complicated function

$$Normal(\mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}(\frac{x - \mu}{\sigma})^2}$$

where $x$ is each particular height. I said at the outset that we wouldn't do very much maths. Luckily, you don't have to actually use this function. Instead, we can use our computers to quickly and easily **sample** a large number of values from this distribution, and use these samples as an estimate of my height.
###Code
N = 10000
height_samples = np.random.normal(70, 4, N)
height_samples
plt.hist(height_samples, bins=np.arange(50, 90, 1))
plt.xlabel('Height (Inches)')
plt.ylabel('Samples where\nEoin is this tall');
###Output
_____no_output_____
###Markdown
Now that we have these samples, we can use them to answer questions. For instance, what's the probability that I'm taller than 6 foot (72 inches)? To find out, you need to find how much of the distribution is above this value. This is easy with the samples: it's the proportion of samples greater than this value.
###Code
print('P(Eoin > 72 inches) = %.2f' % np.mean(height_samples > 72))
###Output
_____no_output_____
###Markdown
One more: what's the probability that I'm taller than 5 foot 6, but less than 6 foot?
###Code
p = np.mean((height_samples > 66) & (height_samples < 72))
print('P(66 < Eoin < 72) = %.2f' % p)
###Output
_____no_output_____
###Markdown
So far, this isn't very exciting. You could obtain the same answers by just checking what proportion of Irish men are more than 6 foot tall, without using impressive terms like "Bayesian". This approach comes into its own when we must combine multiple sources of information. In Bayesian statistical modelling, this usually means combining our prior beliefs (like your beliefs about how tall I'm likely to be) with multiple data points. Here, we're going to look at a simpler example: making predictions based on uncertain knowledge. To do it, we're going to have to start talking about COVID-19. Sorry.

Incubation Periods

A few weeks ago, when I wrote the first draft of this post, the whole world was talking about incubation periods, specifically that it took up to two weeks for COVID symptoms to develop after contact with a carrier. This prompted some confusing infographics, like the one below from the BBC. Where does the 14 days figure come from? [Lauer et al (2020)](https://annals.org/aim/fullarticle/2762808/incubation-period-coronavirus-disease-2019-covid-19-from-publicly-reported) analysed incubation times from 181 cases, and concluded that the times followed a Log-Normal distribution with $\mu = 1.621$ and $\sigma = 0.418$. Note that these parameters are the mean and standard deviation of the log of the incubation time distribution, rather than the mean and standard deviation of the incubation times themselves.

$$\alpha \sim \text{Log-Normal}(1.621, 0.418)$$

They also posted their data and code to [this GitHub repository](https://github.com/HopkinsIDD/ncov_incubation). As we've seen above, since we don't know in advance the exact incubation time in individual cases, we can use simulated samples from this distribution instead.
###Code
def histogram(x, label=None, bins=None, density=True):
    '''We'll be doing lots of histograms, so here's a
    function that makes them easier'''
plt.hist(x, bins=bins, density=density);
if label:
plt.xlabel(label)
if not density:
plt.yticks([])
plt.gca().axes.spines['left'].set_visible(False)
else:
plt.ylabel('Probability')
n = 10000
incubation_mu = 1.621
incubation_sigma = 0.418
incubation_times = np.random.lognormal(incubation_mu, incubation_sigma, n)
## We could also use
# incubation_times = stats.lognorm(loc=0, scale=np.exp(incubation_mu), s=incubation_sigma).rvs(n)
histogram(incubation_times, u'Incubation time in days', bins=range(20))
###Output
_____no_output_____
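###Markdown
Since $\mu$ and $\sigma$ are on the log scale, it can help to translate them back into days. This is just the standard Log-Normal median and mean, $e^{\mu}$ and $e^{\mu + \sigma^2/2}$:
###Code
median_days = np.exp(incubation_mu)                          # roughly 5.1 days
mean_days = np.exp(incubation_mu + incubation_sigma**2 / 2)  # roughly 5.5 days
print('Median incubation time: %.1f days' % median_days)
print('Mean incubation time:   %.1f days' % mean_days)
print('Mean of our samples:    %.1f days' % incubation_times.mean())
###Output
_____no_output_____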
###Markdown
Note that we've set `density=True`, so instead of showing the number of simulated values in each bin, the histogram shows the *proportion* of values in each bin.

Have you caught it?

We can already use these samples to answer some questions. Let's say you're in full quarantine. Just before going in, you had to be somewhere risky, where there was a probability $\alpha$ you picked up the virus. We'll start by assuming there was a fifty-fifty chance of picking up the virus, $\alpha = .5$. Let's also assume for now (wrongly) that if you have the virus, you're guaranteed to develop symptoms eventually. After $d$ days in quarantine, you still haven't developed any symptoms. What is the probability that you've picked up the virus? We can work this out analytically, and we do below, but it's much easier to just run simulations here. To do this, we run $N$ simulations of the scenario described above. The rules of the simulations are as follows.

- In each simulation, the probability of catching the virus is $\alpha$, and the probability of not catching it is $1 - \alpha$. This is known as a [*Bernoulli distribution*](https://en.wikipedia.org/wiki/Bernoulli_distribution), with probability $\alpha$.
- In simulations where you do catch it, the length of your incubation period $\tau$ is sampled randomly from the distribution $\text{Log-Normal}(1.621, 0.418)$
- If you do have the virus, and $d \gt \tau$, you have symptoms. Otherwise, you don't.

We want to estimate $P(\text{Has virus} | \text{No symptoms by day }d)$. To do so, we check in how many of the simulations where no symptoms developed by day $d$ you in fact have the virus. Here's the code after 5 days, $d = 5$.
###Code
# Set our parameters
N = 10000 # Number of simulations
alpha = .5 # P(picked up virus)
d = 5 # Days in quarantine.
# Simulations where you're infected
# (1 if infected, 0 otherwise)
is_infected = np.random.binomial(1, alpha, n)
# Incubation times for people who are infected
incubation_times = np.random.lognormal(incubation_mu, incubation_sigma, n)
# Simulations where you have symptoms by day d.
symptoms_today = np.where(is_infected & (incubation_times < d), 1, 0)
# In how many of the simulations where you show no symptoms today
# do you turn out to actually be infected?
p_infected = is_infected[symptoms_today == 0].mean()
print('P(Infected) = %.2f' % p_infected)
###Output
_____no_output_____
###Markdown
Next, let's calculate it over a range of days (😊 = No symptoms).
###Code
# Ineffecient code
days = np.arange(0, 20)
probabilities = []
for d in days:
symptoms_today = np.where(is_infected & (incubation_times < d), 1, 0)
p_infected = is_infected[symptoms_today==0].mean()
probabilities.append(p_infected)
plt.plot(days, probabilities)
plt.ylim(0, 1)
plt.xlim(0, 20)
plt.xlabel('Days in Quarantine')
plt.ylabel('P(Infected | 😊)')
plt.show()
###Output
_____no_output_____
###Markdown
Finally, just to show off, let's make our code more efficient, wrap it in a function, and then wrap that in an interactive widget.
###Code
def plot_p_infected(alpha: float,
incubation_mu: float=1.6, incubation_sigma: float=0.4):
'''Plot posterior probability that you're infected if you
remain symptom free over a number of days since contact.
Arguments:
- prior_p_infected: Prior probability of infection.
- prob_symptoms_if_infected: Probability of symptoms if infected
- incubation_mu: Log-mean parameter for incubution time distribution (default = 1.6)
- incubation_sigma: Log-SD parameter for incubution time distribution (default = 0.4)
Returns nothing, but plots a figure.
'''
n = 1000
days = range(0, 20)
is_infected = np.random.binomial(1, alpha, n)
incubation_times = np.random.lognormal(incubation_mu, incubation_sigma, n)
def get_p_for_day(d):
'''Calculate P(Infected) after d days'''
symptoms_today = np.where(is_infected & (incubation_times < d), 1, 0)
return is_infected[symptoms_today==0].mean()
probabilities = [get_p_for_day(d) for d in days]
plt.plot(days, probabilities)
plt.ylim(0, 1)
plt.xlabel('Days since contact')
plt.ylabel('P(Infected | 😊)')
plt.show()
interact(plot_p_infected,
alpha = fs(.5, 0, 1, .1, 'α'),
incubation_mu = fs(1.6, 0, 5, .1, 'Incubation μ'),
incubation_sigma = fs(0.4, 0, 1, .1, 'Incubation σ'));
###Output
_____no_output_____
###Markdown
Room for Improvement

There are quite a few assumptions in this analysis that aren't right. For example, it assumes:

- Everyone who has the virus will develop symptoms.
- Symptoms are obvious. In reality, knowing whether or not you have the symptoms is a signal detection problem, and you need to include that uncertainty in your analysis.
- There is no other possible cause of COVID-like symptoms.

Analytic Solution (Optional)

If you want to see the analytic solution to this problem, [check this endnote](#endnote1).

Am I still infectious?

Let's try a slightly more complicated example. The infographic above makes two strong assumptions. First, it assumes that the incubation period is 14 days, so that a person stops being infectious 14 days after they pick up the virus. Second, it assumes that on Day 1, when Mum gets sick, everyone else in the family picks up the virus from her immediately. I don't know what this is called, so let's call it the *acquisition time*: this infographic assumes an acquisition time of 0 days. Third, it assumes that once you show symptoms, you're contagious for 7 days. We'll call this the *recovery time*, although we won't worry about it in this post. These are reasonable assumptions, since these are our best estimates for how these things work. However, in reality we're uncertain about all of these things, since these things will take longer in some cases than in others. It also assumes that the recovery time starts counting down from the day you first show symptoms, rather than the day you stop showing symptoms. I don't know if that's a good assumption - what if your symptoms last more than a week? - but I'm going to ignore it for now.

We can make the model used in this infographic explicit as follows. Let's call the acquisition period $\tau_A$, and the incubation period $\tau_B$. $d_1$ is the day on which the first person in the family (Mum) shows symptoms. The graphic assumes $\tau_A = 0$, and $\tau_B = 14$. If you do not develop symptoms, the model states that you are contagious until day $\tau_C = d_1 + \tau_A + \tau_B$ (day of first infection, plus time taken to acquire the virus yourself, plus time it would take for symptoms to develop if you have acquired it), and not contagious afterwards. Note we're not using the same notation as earlier in this post.

We've already seen that our best estimate of the incubation time is just

$$\tau_B \sim \text{Log-Normal}(1.621, 0.418)$$
###Code
histogram(incubation_times, 'Incubation times (β)', bins=range(20))
###Output
_____no_output_____
###Markdown
I don't know of any empirical estimates of the acquisition time $\tau_A$, which is assumed to be 0 days here. It's useful to reframe this in terms of the *acquisition probability*, $\theta_A$: if you wake up without the virus, but someone in your house has it, what is the probability that you'll catch it that day?

The acquisition time, $\tau_A$, follows a [geometric distribution](https://en.wikipedia.org/wiki/Geometric_distribution) (the first kind discussed on the Wikipedia page), with success probability $\theta_A$ ("success" here meaning you successfully acquire the virus. I didn't choose this terminology). The average acquisition time in this case is just $\frac{1}{\theta_A}$.
###Code
thetas = [.25, .5, .75, .9999]
times = np.arange(0, 7, 1)
fig, axes = plt.subplots(1, 2, figsize=(12, 4))
plt.sca(axes[0])
for theta in thetas:
# Offset by one
d = stats.geom(theta).pmf(times+1)
plt.plot(times, d, '-o', label='$\\theta_A = $%.2f' % theta)
plt.xlabel('Days since first infection')
plt.ylim(0, 1.1)
plt.legend()
plt.ylabel('P(Get infected today)')
plt.sca(axes[1])
for theta in thetas:
d = stats.geom(theta).cdf(times+1)
plt.plot(times, d, '-o', label='$\\theta_A = $%.2f' % theta)
plt.xlabel('Days since first infection')
plt.ylim(0, 1.1)
plt.legend()
plt.ylabel('P(Infected by end of today)')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
For now, let's assume you have an 80% chance of picking up the virus for every day you spend living with someone who is contagious: $\theta_A = 0.8$. We can then obtain a distribution over likely acquisition times, $\tau_A$, by sampling from a geometric distribution with probability parameter $0.8$.

$$\tau_A \sim \text{Geometric}(0.8)$$
###Code
def sample_acq_times(acq_prob, n=n):
return np.random.geometric(acq_prob, n) - 1 # Offset by one
acquisition_prob = .8
acquisition_times = sample_acq_times(acquisition_prob)
histogram(acquisition_times, 'Acquisition time in days ($\\tau_A$)', bins=range(0, 8))
plt.ylim(0, 1);
###Output
_____no_output_____
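###Markdown
A quick sanity check on the sampler. Because `sample_acq_times` subtracts one so that the virus can be acquired on day 0, the sampled mean should sit near $\frac{1}{\theta_A} - 1$ rather than $\frac{1}{\theta_A}$:
###Code
print('1/θ - 1 = %.2f days' % (1 / acquisition_prob - 1))
print('Sampled mean acquisition time = %.2f days' % acquisition_times.mean())
###Output
_____no_output_____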
###Markdown
Combining Distributions

We now have samples from distributions representing our uncertainty about the acquisition time $\tau_A$, and the incubation period $\tau_B$. We want to calculate the distribution of the isolation time, $\tau_C = d_1 + \tau_A + \tau_B$. To do this, we just apply this formula to each individual sample: $\tau_C^i = d_1 + \tau_A^i + \tau_B^i$, for $i$ in $1, 2, \dots, n$.
###Code
isolation_times = acquisition_times + incubation_times
histogram(isolation_times, 'Isolation time in days (τ = α + β)', bins=range(30))
###Output
_____no_output_____
###Markdown
With this distribution, we can answer questions like

- What's the probability of still being contagious after 7 days? After 14 days?
- How long must you wait to have a 95% chance of not being contagious?
###Code
def p_still_contagious(isolation_times, days):
'''What is the probability you\'re contagious after `days` days?'''
p = np.mean(isolation_times > days)
return 'P(Contagious) after %i days\t= %.3f' % (days, p)
for days in [2, 7, 14]:
print(p_still_contagious(isolation_times, days))
def days_until_probability(isolation_times, prob):
'''How many days must you wait to have this probability of being non-contagious?'''
days = np.percentile(isolation_times, prob*100) # Percentile expects values 0-100
return 'P(Not contagious) = %.3f after %i days' % (prob, days)
for prob in [.5, .75, .9, .95, .99]:
print(days_until_probability(isolation_times, prob))
###Output
_____no_output_____
###Markdown
Of course, the value of $\theta = 0.8$ was only a guess. How much do our conclusions depend on these parameters? To find out, we create a function that takes these parameter values as inputs, and outputs a distribution over isolation times $\tau_C$, using the code above.
###Code
def infer_isolation_times(mu_incubation_time, sigma_incubation_time, acquisition_prob, n=10000):
incubation_times = np.random.lognormal(mu_incubation_time, sigma_incubation_time, n)
acquisition_times = sample_acq_times(acquisition_prob, n)
isolation_times = acquisition_times + incubation_times
return isolation_times
###Output
_____no_output_____
###Markdown
Better still, we can wrap this in another function that takes these parameters and produces a histogram and some summary values.
###Code
def plot_isolation_times(isolation_times):
fig, axes = plt.subplots(1, 2, figsize=(16, 4))
plt.sca(axes[0])
plt.hist(isolation_times, bins=20)
plt.xlabel('Isolation times (days)')
plt.yticks([])
plt.gca().axes.spines['left'].set_visible(False)
plt.sca(axes[1])
q = np.linspace(0, 1, 50)
d = np.percentile(isolation_times, q*100)
plt.ylim(0, 1.05)
plt.plot(d, q)
for ax in axes:
ax.set_xlim(0, 20)
ax.vlines(14, *ax.get_ylim(), linestyle='dashed')
plt.xlabel('Time since first symptoms in house')
plt.ylabel('P(No longer contagious)')
def show_isolation_times(mu_incubation_time=1.621,
sigma_incubation_time=0.418,
acquisition_prob=0.9):
isolation_times = infer_isolation_times(mu_incubation_time, sigma_incubation_time, acquisition_prob, n=1000)
plot_isolation_times(isolation_times)
show_isolation_times()
###Output
_____no_output_____
###Markdown
...and wrap the whole thing in an interactive widget.
###Code
interact(show_isolation_times,
mu_incubation_time = fs(1.6, 0, 5, .1, 'Incubation μ'),
sigma_incubation_time = fs(0.4, 0, 1, .1, 'Incubation σ'),
acquisition_prob = fs(.9, 0, 1, .1, 'Acquisition θ'));
###Output
_____no_output_____
###Markdown
As before, this problem can be solved analytically, without simulations. Unlike before, I'm not going to bother figuring out what it is this time. Finally, after playing with this interactive slider for a while, we can identify some effects.
###Code
def do_isolation_curve(mu_incubation_time=1.621,
sigma_incubation_time=0.418,
acquisition_prob=0.9,
*args, **kwargs):
'''Draw isolation time curve for these parameters'''
isolation_times = infer_isolation_times(mu_incubation_time, sigma_incubation_time, acquisition_prob, n=1000)
q = np.linspace(0, 1, 50)
d = np.percentile(isolation_times, q*100)
d[0] = 0; d[-1] = 20 # Hack to force curve to go to end of plot
label = 'μ = %.1f, σ = %.1f, θ = %.1f' % (mu_incubation_time, sigma_incubation_time, acquisition_prob)
plt.plot(d, q, label=label, *args, **kwargs)
mu_values = [1.4, 1.8]
sigma_values = [.2, .6]
theta_values = [.5, 1]
mu_default = 1.6
sigma_default = .4
theta_default = .8
default_pars = np.array([mu_default, sigma_default, theta_default])
alt_pars = [mu_values, sigma_values, theta_values]
titles = ['A. Effect of mean incubation time',
'B. Effect of variability in incubation time',
'C. Effect of acquisition probability' ]
pal = iter(sns.palettes.color_palette(n_colors=6))
fig, axes = plt.subplots(1, 3, figsize=(20, 5))
for i in range(3):
plt.subplot(axes[i])
for j in range(2):
pars = np.copy(default_pars)
pars[i] = alt_pars[i][j]
do_isolation_curve(*pars, color=next(pal))
plt.xlabel('Time since first symptoms in house')
plt.ylabel('P(No longer contagious)')
plt.xlim(0, 20)
plt.vlines(14, *plt.ylim(), linestyle='dashed')
plt.legend()
plt.title(titles[i])
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
- **A.** If the incubation time is longer than we think, people should isolate for longer. This should be obvious, since ($\tau_C = \tau_A + \tau_B$)
- **B.** If the incubation time is highly variable, your chances of still being contagious in the first few days are reduced, but the chance of still being contagious after 5 days or more is increased.
- **C.** If transmission within a household is slower, the isolation period needs to be longer.

Decision Theory

I was going to finish by talking about *Bayesian decision theory*, an extension of this framework that allows us to plug these simulations into cost-benefit analysis. However, this post is already far too long, so instead I'll close, and maybe return to the topic some other day.

Endnotes

Analytic Solution for Am I Infectious?

We can work out the analytic solution here, if we really want to. Consider the path diagram below. If someone has no symptoms by day $d$, they either are infected but haven't developed symptoms yet (**outcome B**; red), or they aren't sick (**outcome C**; green). This means the probability they're infected is

$$P(\text{Infected}) = \frac{P(B)}{P(B) + P(C)}$$

Since we believe $\tau$ follows a log normal distribution $\text{Log-Normal}(1.621, 0.418)$, we know that

$$p(\tau \gt d) = 1 - \int_0^d{f(t\ \mid\ 1.621, 0.418)}\,dt$$

where $f(t\ \mid\ \mu, \sigma)$ is the log-normal probability density function. Putting this together, we find

$$\begin{align}P(\text{Infected}) &= \frac{P(B)}{P(B) + P(C)}\\ &= \frac{\alpha\, p(\tau > d)}{\alpha\, p(\tau > d) + 1 - \alpha}\\ &= \frac{\alpha \left(1 - \int_0^d{f(t\ \mid\ 1.621, 0.418)}\,dt\right)} {\alpha \left(1 - \int_0^d{f(t\ \mid\ 1.621, 0.418)}\,dt\right) + 1 - \alpha}\end{align}$$

Is all this right? I think so, but this is more algebra than I'm used to doing, so let's confirm by visualising it again.
###Code
def plot_p_infected_analytic(alpha: float,
incubation_mu: float=1.6, incubation_sigma: float=0.4):
'''Same as `plot_p_infected`, but using analytic solution.
'''
days = np.arange(0, 20)
incubation_time_distribution = stats.lognorm(loc=0,
scale=np.exp(incubation_mu),
s=incubation_sigma)
# Find P(𝜏 < d) from the cumulative distribution function, and invert it.
    prob_A = alpha * (1 - incubation_time_distribution.cdf(days))  # P(B): infected, but no symptoms yet
    prob_B = (1 - alpha)                                           # P(C): not infected
prob_infected = prob_A / (prob_A + prob_B)
plt.plot(days, prob_infected)
plt.ylim(0, 1)
plt.xlabel('Days since contact')
plt.ylabel(u'P(Infected | 😊)')
plt.show()
interact(plot_p_infected_analytic,
alpha = fs(.5, 0, 1, .1, 'α'),
incubation_mu = fs(1.6, 0, 5, .1, 'Incubation μ'),
incubation_sigma = fs(0.4, 0, 1, .1, 'Incubation σ'));
###Output
_____no_output_____ |
Learning Notes/Learning Notes ML - 3 Parameters and Model Validation.ipynb | ###Markdown
HYPERPARAMETERS AND MODEL VALIDATION
###Code
The first two steps of supervised ML are
1 Choose a class of model > import the appropriate estimator class from Scikit-Learn.
2 Choose model hyperparameters > instantiate the class with desired values.
###Output
_____no_output_____
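###Markdown
A minimal sketch of those two steps, using LogisticRegression purely as an arbitrary example estimator (any Scikit-Learn estimator follows the same pattern):
###Code
# 1. choose a class of model
from sklearn.linear_model import LogisticRegression
# 2. choose model hyperparameters by instantiating the class with desired values
model = LogisticRegression(C=1.0)
###Output
_____no_output_____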
###Markdown
Model Validation Hold Out Set
###Code
from sklearn.datasets import load_iris
iris = load_iris()
X = iris['data']
y = iris['target']
# Here we'll use a k-neighbors classifier with n_neighbors=1.
# This is a very simple and intuitive model that says "the label of an unknown point is the same as the label of its closest training point:"
from sklearn.neighbors import KNeighborsClassifier
model = KNeighborsClassifier(n_neighbors=1)
# split the data with 50% in each set
from sklearn.model_selection import train_test_split
X1, X2, y1, y2 = train_test_split(X, y, random_state=0, train_size=0.5)
# fit the model on one set of data
model.fit(X1, y1)
# evaluate the model on the second set of data
from sklearn.metrics import accuracy_score
y2_model = model.predict(X2)
accuracy_score(y2, y2_model)
###Output
_____no_output_____
###Markdown
Model validation via cross-validation
###Code
The issue with the Hold Out method is that we have not used a good portion of the dataset for training.
One way to address this is to use cross-validation, where we do a sequence of fits in which each data subset is used both as a training set and as a validation set.
# Here we do two validation trials, alternately using each half of the data as a holdout set.
# Using the split data from before, we could implement it like this:
y2_model = model.fit(X1, y1).predict(X2)
y1_model = model.fit(X2, y2).predict(X1)
accuracy_score(y1, y1_model), accuracy_score(y2, y2_model)
Expanding this 2-fold cross-validation into 5 groups can be done manually, or we can use Scikit-Learn's cross_val_score convenience routine:
from sklearn.model_selection import cross_val_score
cross_val_score(model, X, y, cv=5)
Scikit-Learn implements a number of cross-validation schemes that are useful in particular situations.
These are implemented via iterators in the model_selection module.
For example, we might wish to go to the extreme case in which our number of folds is equal to the number of data points:
here, we train on all points but one in each trial. This type of cross-validation is known as leave-one-out cross-validation, and can be used as follows:
from sklearn.model_selection import LeaveOneOut
scores = cross_val_score(model, X, y, cv=LeaveOneOut())
scores
scores.mean()
###Output
_____no_output_____
###Markdown
Selecting the best model
###Code
The core question is the following: if our estimator is underperforming, how should we move forward?
There are several possible answers:
- Use a more complicated/more flexible model
- Use a less complicated/less flexible model
- Gather more training samples
- Gather more data to add features to each sample
The answer to this question is often counter-intuitive.
- Sometimes using a more complicated model will give worse results, and adding more training samples may not improve your results
- The ability to determine what steps will improve your model is what separates the successful machine learning practitioners from the unsuccessful.
###Output
_____no_output_____
###Markdown
The Bias-Variance trade-off
###Code
Fundamentally, the question of "the best model" is about finding a sweet spot in the tradeoff between bias and variance.
Consider the following figure, which presents two regression fits to the same dataset:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
%matplotlib inline
def my_data(N=30, err=0.8, rseed=1):
# randomly sample the data
rng = np.random.RandomState(rseed)
X = rng.rand(N, 1) ** 2
y = 10 - 1. / (X.ravel() + 0.1)
if err > 0:
y += err * rng.randn(N)
return X, y
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
def PolynomialRegression(degree=2, **kwargs):
return make_pipeline(PolynomialFeatures(degree),
LinearRegression(**kwargs))
X, y = my_data()
xfit = np.linspace(-0.1, 1.0, 1000)[:, None]
model1 = PolynomialRegression(1).fit(X, y)
model20 = PolynomialRegression(20).fit(X, y)
fig, ax = plt.subplots(1, 2, figsize=(10, 4))
fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1, )
ax[0].scatter(X.ravel(), y, s=40)
ax[0].plot(xfit.ravel(), model1.predict(xfit), color='gray')
ax[0].axis([-0.1, 1.0, -2, 14])
ax[0].set_title('High-bias model: Underfits the data', size=14)
ax[1].scatter(X.ravel(), y, s=40)
ax[1].plot(xfit.ravel(), model20.predict(xfit), color='gray')
ax[1].axis([-0.1, 1.0, -2, 14])
ax[1].set_title('High-variance model: Overfits the data', size=14)
Now let's add some new data points. The red points indicate data that is omitted from the training set.
fig, ax = plt.subplots(1, 2, figsize=(10,4))
fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)
X2, y2 = my_data(10, rseed=42)
ax[0].scatter(X.ravel(), y, s=40, c='blue')
ax[0].plot(xfit.ravel(), model1.predict(xfit), color='gray')
ax[0].axis([-0.1, 1.0, -2, 14])
ax[0].set_title('High-bias model: Underfits the data', size=14)
ax[0].scatter(X2.ravel(), y2, s=40, c='red')
ax[0].text(0.02, 0.98, "training score: $R^2$ = {0:.2f}".format(model1.score(X, y)),
ha='left', va='top', transform=ax[0].transAxes, size=14, color='blue')
ax[0].text(0.02, 0.91, "validation score: $R^2$ = {0:.2f}".format(model1.score(X2, y2)),
ha='left', va='top', transform=ax[0].transAxes, size=14, color='red')
ax[1].scatter(X.ravel(), y, s=40, c='blue')
ax[1].plot(xfit.ravel(), model20.predict(xfit), color='gray')
ax[1].axis([-0.1, 1.0, -2, 14])
ax[1].set_title('High-variance model: Overfits the data', size=14)
ax[1].scatter(X2.ravel(), y2, s=40, c='red')
ax[1].text(0.02, 0.98, "training score: $R^2$ = {0:.2g}".format(model20.score(X, y)),
ha='left', va='top', transform=ax[1].transAxes, size=14, color='blue')
ax[1].text(0.02, 0.91, "validation score: $R^2$ = {0:.2g}".format(model20.score(X2, y2)),
ha='left', va='top', transform=ax[1].transAxes, size=14, color='red')
- For high-bias models, the performance of the model on the validation set is similar to the performance on the training set.
- For high-variance models, the performance of the model on the validation set is far worse than the performance on the training set.
x = np.linspace(0, 1, 1000)
y1 = -(x - 0.5) ** 2
y2 = y1 - 0.33 + np.exp(x - 1)
fig, ax = plt.subplots()
ax.plot(x, y2, lw=7, alpha=0.5, color='blue')
ax.plot(x, y1, lw=7, alpha=0.5, color='red')
ax.text(0.15, 0.075, "training score", rotation=45, size=12, color='blue')
ax.text(0.2, -0.05, "validation score", rotation=20, size=12, color='red')
ax.text(0.02, 0.1, r'$\longleftarrow$ High Bias', size=12, rotation=90, va='center')
ax.text(0.98, 0.1, r'$\longleftarrow$ High Variance $\longrightarrow$', size=12, rotation=90, ha='right', va='center')
ax.text(0.48, -0.12, 'Best$\\longrightarrow$\nModel', size=12, rotation=90, va='center')
ax.set_xlim(0, 1)
ax.set_ylim(-0.3, 0.5)
ax.set_xlabel(r'model complexity $\longrightarrow$', size=14)
ax.set_ylabel(r'model score $\longrightarrow$', size=14)
ax.xaxis.set_major_formatter(plt.NullFormatter())
ax.yaxis.set_major_formatter(plt.NullFormatter())
ax.set_title("Validation Curve Schematic", size=16)
Validation curve (above):
- The training score is everywhere higher than the validation score. This is generally the case
- For very low model complexity (a high-bias model), the training data is under-fit: the model is a poor predictor both for the training data and for new data
- For very high model complexity (a high-variance model), the training data is over-fit: the model fails for any previously unseen data
- For some intermediate value, the validation curve has a maximum. This level of complexity indicates a suitable trade-off between bias and variance.
- The means of tuning the model complexity varies from model to model
###Output
_____no_output_____
###Markdown
Best model - an attempt
###Code
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
def PolynomialRegression(degree=2, **kwargs):
return make_pipeline(PolynomialFeatures(degree),
LinearRegression(**kwargs))
import numpy as np
def make_data(N, err=1.0, rseed=1):
# randomly sample the data
rng = np.random.RandomState(rseed)
X = rng.rand(N, 1) ** 2
y = 10 - 1. / (X.ravel() + 0.1)
if err > 0:
y += err * rng.randn(N)
return X, y
X, y = make_data(40)
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn; seaborn.set() # plot formatting
X_test = np.linspace(-0.1, 1.1, 500)[:, None]
plt.scatter(X.ravel(), y, color='black')
axis = plt.axis()
for degree in [1, 3, 5]:
y_test = PolynomialRegression(degree).fit(X, y).predict(X_test)
plt.plot(X_test.ravel(), y_test, label='degree={0}'.format(degree))
plt.xlim(-0.1, 1.0)
plt.ylim(-2, 12)
plt.legend(loc='best');
We can make progress in this by visualizing the validation curve for this particular data and model
- this can be done straightforwardly using the validation_curve convenience routine provided by Scikit-Learn
- we just need to provide a model, data, parameter name, and a range to explore
- this function will automatically compute both the training score and validation score across the range:
from sklearn.model_selection import validation_curve
degree = np.arange(0, 21)
train_score, val_score = validation_curve(PolynomialRegression(), X, y,'polynomialfeatures__degree', degree, cv=7)
plt.plot(degree, np.median(train_score, 1), color='blue', label='training score')
plt.plot(degree, np.median(val_score, 1), color='red', label='validation score')
plt.legend(loc='best')
plt.ylim(0, 1)
plt.xlabel('degree')
plt.ylabel('score');
This shows precisely the qualitative behavior we expect:
- the training score is everywhere higher than the validation score
- the training score is monotonically improving with increased model complexity
- the validation score reaches a maximum before dropping off as the model becomes over-fit.
From the validation curve, we can read-off that the optimal trade-off between bias and variance is found for a third-order polynomial.
# we can compute and display this fit over the original data as follows:
plt.scatter(X.ravel(), y)
lim = plt.axis()
y_test = PolynomialRegression(3).fit(X, y).predict(X_test)
plt.plot(X_test.ravel(), y_test);
plt.axis(lim);
###Output
_____no_output_____
###Markdown
Learning Curves
###Code
One important aspect of model complexity is that the optimal model will generally depend on the size of your training data.
For example, let's generate a new dataset with a factor of five more points:
X2, y2 = my_data(200)
plt.scatter(X2.ravel(), y2);
# We will duplicate the preceding code to plot the validation curve for this larger dataset; for reference let's over-plot the previous results as well:
degree = np.arange(21)
train_score2, val_score2 = validation_curve(PolynomialRegression(), X2, y2, 'polynomialfeatures__degree', degree, cv=7)
plt.plot(degree, np.median(train_score2, 1), color='blue', label='training score')
plt.plot(degree, np.median(val_score2, 1), color='red', label='validation score')
plt.plot(degree, np.median(train_score, 1), color='blue', alpha=0.3, linestyle='dashed')
plt.plot(degree, np.median(val_score, 1), color='red', alpha=0.3, linestyle='dashed')
plt.legend(loc='lower center')
plt.ylim(0, 1)
plt.xlabel('degree')
plt.ylabel('score')
The solid lines show the new results, while the fainter dashed lines show the results of the previous smaller dataset.
It is clear from the validation curve that the larger dataset can support a much more complicated model:
- the peak here is probably around a degree of 6
- even a degree-20 model is not seriously over-fitting the data—the validation and training scores remain very close.
Thus we see that the behavior of the validation curve has not one but two important inputs:
- the model complexity
- the number of training points.
It is often useful to explore the behavior of the model as a function of the number of training points.
A plot of the training/validation score with respect to the size of the training set is known as a learning curve.
The general behavior we would expect from a learning curve is this:
- A model of a given complexity will overfit a small dataset: training score will be relatively high / validation score will be relatively low.
- A model of a given complexity will underfit a large dataset: training score will decrease / the validation score will increase.
- A model will never (except by chance) give a better score to the validation set than the training set
With these features in mind, we would expect a learning curve to look qualitatively like that shown in the following figure:
N = np.linspace(0, 1, 1000)
y1 = 0.75 + 0.2 * np.exp(-4 * N)
y2 = 0.7 - 0.6 * np.exp(-4 * N)
fig, ax = plt.subplots()
ax.plot(N, y2, lw=7, alpha=0.5, color='red')
ax.plot(N, y1, lw=7, alpha=0.5, color='blue')
ax.text(0.2, 0.82, "training score", rotation=-10, size=12, color='blue')
ax.text(0.2, 0.35, "validation score", rotation=30, size=12, color='red')
ax.text(0.98, 0.45, r'Good Fit $\longrightarrow$', size=12, rotation=90, ha='right', va='center')
ax.text(0.02, 0.57, r'$\longleftarrow$ High Variance $\longrightarrow$', size=12, rotation=90, va='center')
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_xlabel(r'training set size $\longrightarrow$', size=12)
ax.set_ylabel(r'model score $\longrightarrow$', size=12)
ax.xaxis.set_major_formatter(plt.NullFormatter())
ax.yaxis.set_major_formatter(plt.NullFormatter())
ax.set_title("Learning Curve Schematic", size=16)
The notable feature of the learning curve is the convergence to a particular score as the number of training samples grows.
In particular, once you have enough points that a particular model has converged, adding more training data will not help you!
The only way to increase model performance in this case is to use another (often more complex) model.
###Output
_____no_output_____
###Markdown
Learning Curves in Scikit-Learn
###Code
Scikit-Learn offers a convenient utility for computing such learning curves from your models.
Here we will compute a learning curve for our original dataset with a second-order polynomial model and a ninth-order polynomial:
from sklearn.model_selection import learning_curve
fig, ax = plt.subplots(1, 2, figsize=(16, 6))
fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)
for i, degree in enumerate([2, 9]):
N, train_lc, val_lc = learning_curve(PolynomialRegression(degree),X, y, cv=7,train_sizes=np.linspace(0.3, 1, 25))
ax[i].plot(N, np.mean(train_lc, 1), color='blue', label='training score')
ax[i].plot(N, np.mean(val_lc, 1), color='red', label='validation score')
ax[i].hlines(np.mean([train_lc[-1], val_lc[-1]]), N[0], N[-1],
color='gray', linestyle='dashed')
ax[i].set_ylim(0, 1)
ax[i].set_xlim(N[0], N[-1])
ax[i].set_xlabel('training size')
ax[i].set_ylabel('score')
ax[i].set_title('degree = {0}'.format(degree), size=14)
ax[i].legend(loc='best')
This is a valuable diagnostic, because it gives us a visual depiction of how our model responds to increasing training data.
In particular, when your learning curve has already converged, adding more training data will not significantly improve the fit!
This situation is seen in the left panel, with the learning curve for the degree-2 model.
The only way to increase the converged score is to use a different (usually more complicated) model.
We see this in the right panel: by moving to a much more complicated model
- we increase the score of convergence (indicated by the dashed line),
- we incur higher model variance (indicated by the difference between the training and validation scores).
 - this extra variance can be mitigated by adding more data points, which brings the curves back towards convergence
Plotting a learning curve can help to make this type of decision about how to move forward in improving the analysis.
###Output
_____no_output_____
###Markdown
Validation in Practice : Grid Search
###Code
In practice, models generally have more than one knob to turn (complexity/data).
Thus plots of validation and learning curves change from lines to multi-dimensional surfaces.
In these cases, such visualizations are difficult and we would rather simply find the particular model that maximizes the validation score.
Scikit-Learn provides automated tools to do this in the grid search module.
Here is an example of using grid search to find the optimal polynomial model.
We will explore a three-dimensional grid of model features:
- the polynomial degree
- the flag telling us whether to fit the intercept
- the flag telling us whether to normalize the problem.
This can be set up using Scikit-Learn's GridSearchCV meta-estimator:
from sklearn.model_selection import GridSearchCV
param_grid = {'polynomialfeatures__degree': np.arange(21),
'linearregression__fit_intercept': [True, False],
'linearregression__normalize': [True, False]}
grid = GridSearchCV(PolynomialRegression(), param_grid, cv=7)
# Notice that like a normal estimator, this has not yet been applied to any data.
# Calling the fit() method will fit the model at each grid point, keeping track of the scores along the way:
grid.fit(X, y);
# Now that this is fit, we can ask for the best parameters as follows:
grid.best_params_
# lets use this then
model = grid.best_estimator_
plt.scatter(X.ravel(), y)
lim = plt.axis()
y_test = model.fit(X, y).predict(X_test)
plt.plot(X_test.ravel(), y_test);
plt.axis(lim);
The grid search provides many more options, including (sketched below):
- the ability to specify a custom scoring function
- the ability to parallelize the computations
- the ability to do randomized searches, and more
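For reference, here is a hedged sketch (not executed in this notebook) of what those options can look like. It reuses the PolynomialRegression model and the param_grid, X, and y defined in the cells above; the particular choices shown here (n_iter=20, an MSE-based scorer) are illustrative assumptions, not recommendations:
# Sketch only: custom scorer, parallel execution, and a randomized search
from sklearn.model_selection import RandomizedSearchCV
from sklearn.metrics import make_scorer, mean_squared_error
scorer = make_scorer(mean_squared_error, greater_is_better=False)  # custom scoring function (negated MSE)
rand_search = RandomizedSearchCV(PolynomialRegression(), param_grid,
                                 n_iter=20,       # sample 20 random parameter combinations
                                 scoring=scorer,  # use the custom scorer
                                 cv=7,
                                 n_jobs=-1,       # parallelize across all available cores
                                 random_state=0)
rand_search.fit(X, y)
print(rand_search.best_params_)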
###Output
_____no_output_____ |
tasks/task_09_CSG_surface_tally_dose/1_surface_dose_from_gamma_source.ipynb | ###Markdown
Part 1 - Simulations of effective dose on a surface

Effective dose is used to assess the potential for long-term radiation effects that might occur in the future. Effective dose provides a single number that reflects the exposure to radiation. To quote ICRP, who define the quantity, "it sums up any number of different exposures into a single number that reflects, in a general way, the overall risk".

Effective dose is a calculated value, measured in mSv, and is calculated for the whole body. It is the sum of equivalent doses to all organs, each adjusted to account for the sensitivity of the organ to radiation. Read more about equivalent dose, absorbed dose and effective dose on the ICRP website: http://icrpaedia.org/Absorbed,_Equivalent,_and_Effective_Dose

The effective dose deposited by a neutron or photon depends on the energy of the particle, and the dose coefficients provided by ICRP are energy dependent. The following section plots the effective dose coefficient as a function of incident particle energy for neutrons and photons.
###Code
import openmc
import plotly.graph_objects as go
energy_bins_n, dose_coeffs_n = openmc.data.dose_coefficients(
particle='neutron',
geometry='AP' # AP defines the direction of the source to person, for more details see documentation https://docs.openmc.org/en/stable/pythonapi/generated/openmc.data.dose_coefficients.html
)
energy_bins_p, dose_coeffs_p = openmc.data.dose_coefficients(particle='photon', geometry='AP')
fig = go.Figure()
fig.update_layout(
title='ICRP Effective Dose Coefficient (AP)',
xaxis={'title': 'Energy (eV)',
'range': (0, 14.1e6)},
yaxis={'title': 'Effective dose per fluence, in units of pSv cm²'}
)
fig.add_trace(go.Scatter(
x=energy_bins_p,
y=dose_coeffs_p,
mode='lines',
name='photon'
))
fig.add_trace(go.Scatter(
x=energy_bins_n,
y=dose_coeffs_n,
mode='lines',
name='neutron'
))
###Output
_____no_output_____
###Markdown
To find the effective dose on a surface, a geometry is needed along with some materials and a tally. The following section makes a 'cask' geometry and materials which will have a dose tally added to them later. This code block makes the material used for the cask.
###Code
steel = openmc.Material(name='steel')
steel.set_density('g/cm3', 7.75)
steel.add_element('Fe', 0.95, percent_type='wo')
steel.add_element('C', 0.05, percent_type='wo')
mats = openmc.Materials([steel])
###Output
_____no_output_____
###Markdown
This code block makes the CSG geometry for the cask.
###Code
height = 100
outer_radius = 50
thickness = 10
outer_cylinder = openmc.ZCylinder(r=outer_radius)
inner_cylinder = openmc.ZCylinder(r=outer_radius-thickness)
inner_top = openmc.ZPlane(z0=height*0.5)
inner_bottom = openmc.ZPlane(z0=-height*0.5)
outer_top = openmc.ZPlane(z0=(height*0.5)+thickness)
outer_bottom = openmc.ZPlane(z0=(-height*0.5)-thickness)
# sphere_1 is used to tally the dose
sphere_1 = openmc.Sphere(r=100)
# we can't tally on the edge-of-universe (vacuum boundary) sphere, hence sphere_1 is needed
sphere_2 = openmc.Sphere(r=101, boundary_type='vacuum')
cylinder_region = -outer_cylinder & +inner_cylinder & -inner_top & +inner_bottom
cylinder_cell = openmc.Cell(region=cylinder_region)
cylinder_cell.fill = steel
top_cap_region = -outer_top & +inner_top & -outer_cylinder
top_cap_cell = openmc.Cell(region=top_cap_region)
top_cap_cell.fill = steel
bottom_cap_region = +outer_bottom & -inner_bottom & -outer_cylinder
bottom_cap_cell = openmc.Cell(region=bottom_cap_region)
bottom_cap_cell.fill = steel
inner_void_region = -inner_cylinder & -inner_top & +inner_bottom
inner_void_cell = openmc.Cell(region=inner_void_region)
# the sphere_1 cell is the region inside sphere_1 (-sphere_1), excluding (~) the other regions
sphere_1_region = -sphere_1
sphere_1_cell = openmc.Cell(
region= sphere_1_region
& ~bottom_cap_region
& ~top_cap_region
& ~cylinder_region
& ~inner_void_region
)
sphere_2_region = +sphere_1 & -sphere_2
sphere_2_cell = openmc.Cell(region= sphere_2_region)
universe = openmc.Universe(cells=[
inner_void_cell, cylinder_cell, top_cap_cell,
bottom_cap_cell, sphere_1_cell, sphere_2_cell])
geom = openmc.Geometry(universe)
###Output
_____no_output_____
###Markdown
This code block plots the geometry and colours regions to identify the cells / materials - useful for checking the geometry looks correct.
###Code
import matplotlib.pyplot as plt
color_assignment = {sphere_1_cell: 'grey',
sphere_2_cell: 'grey',
inner_void_cell: 'grey',
bottom_cap_cell: 'red',
top_cap_cell: 'blue',
cylinder_cell:'yellow',
}
x, y = 200, 200
plt.show(universe.plot(width=(x, y), basis='xz', colors=color_assignment))
plt.show(universe.plot(width=(x, y), basis='xy', colors=color_assignment))
###Output
_____no_output_____
###Markdown
This section makes the source. Note the use of the Co60 gamma source with two energy levels.
###Code
# Instantiate a Settings object
sett = openmc.Settings()
sett.batches = 10
sett.inactive = 0
sett.particles = 500
sett.run_mode = 'fixed source'
# Create a gamma point source
source = openmc.Source()
source.space = openmc.stats.Point((0, 0, 0))
source.angle = openmc.stats.Isotropic()
# This is a Co60 source, see the task on sources to understand it
source.energy = openmc.stats.Discrete([1.1732e6,1.3325e6], [0.5, 0.5])
source.particle = 'photon'
sett.source = source
###Output
_____no_output_____
###Markdown
Dose coefficients can then be used in a neutronics tally with the openmc.EnergyFunctionFilter. This will effectively multiply the particle energy spectrum by the effective dose coefficient to produce a single number for effective dose.
###Code
energy_function_filter_n = openmc.EnergyFunctionFilter(energy_bins_n, dose_coeffs_n)
energy_function_filter_p = openmc.EnergyFunctionFilter(energy_bins_p, dose_coeffs_p)
photon_particle_filter = openmc.ParticleFilter(["photon"])
surface_filter = openmc.SurfaceFilter(sphere_1)
tallies = openmc.Tallies()
dose_tally = openmc.Tally(name="dose_tally_on_surface")
dose_tally.scores = ["current"]
dose_tally.filters = [
surface_filter,
photon_particle_filter,
energy_function_filter_p,
]
tallies.append(dose_tally)
###Output
_____no_output_____
###Markdown
This code block runs the simulations.
###Code
# Run OpenMC!
model = openmc.model.Model(geom, mats, sett, tallies)
!rm *.h5
sp_filename = model.run()
###Output
_____no_output_____
###Markdown
The following section extracts the tally result of the simulation and post-processes it to calculate the dose rate. The surface tally has units of pSv cm² per source particle (p is pico). Therefore, the tally result must be divided by the surface area of the sphere to convert the units into pSv, and then multiplied by the activity (in Bq) of the source to get pSv per second.
###Code
import math
# open the results file
sp = openmc.StatePoint(sp_filename)
# access the tally using pandas dataframes
tally = sp.get_tally(name='dose_tally_on_surface')
df = tally.get_pandas_dataframe()
tally_result = df['mean'].sum()
tally_std_dev = df['std. dev.'].sum()
# convert from the tally output units of pSv cm² to pSv by dividing by the surface area of the tally sphere (sphere_1, radius 100 cm)
dose_in_pSv = tally_result / (4 * math.pi * math.pow(100, 2))
source_activity = 56000 # in decays per second (Bq)
emission_rate = 2 # the number of gammas emitted per decay which is approximately 2 for Co60
gamma_per_second = source_activity * emission_rate
dose_rate_in_pSv = dose_in_pSv * gamma_per_second
# print results
print('The surface dose = ', dose_rate_in_pSv, 'pico Sv per second')
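# --- Optional extension (not part of the original notebook) ---
# The tally standard deviation extracted above can be scaled in the same way to
# attach an uncertainty to the result, and the pSv per second value can be
# converted to the more commonly quoted micro Sv per hour.
dose_rate_std_dev = (tally_std_dev / (4 * math.pi * math.pow(100, 2))) * gamma_per_second
dose_rate_in_uSv_per_hour = dose_rate_in_pSv * 1e-6 * 60 * 60  # pSv/s -> micro Sv/h
print('The surface dose = ', dose_rate_in_pSv, '+/-', dose_rate_std_dev, 'pico Sv per second')
print('The surface dose = ', dose_rate_in_uSv_per_hour, 'micro Sv per hour')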
###Output
_____no_output_____
Flask_Machine_Learning_Project.ipynb | ###Markdown
End-to-End Machine Learning Project deployment on a Flask server
Data Scientist Jr.: Dr. Eddy Giusepe Chirinos Isidro
###Code
from google.colab import drive
drive.mount('/content/drive')
# Import our libraries
import pandas as pd
import numpy as np
from sklearn import linear_model
import warnings
warnings.filterwarnings("ignore")
# Reading CSV
df = pd.read_csv('/content/drive/MyDrive/5_Scripts_in_Python_Eddy/End-to-End_ML_Project_deployment_on_Flask_server/car_data.csv')
df.sample(10)
# shape
df.shape
# Creating DataFrame
inputs = df.drop(['Car_Name', 'Seller_Type'], axis='columns')
target = df.Selling_Price
target
df.head()
# Encoding
from sklearn.preprocessing import LabelEncoder
Numerics = LabelEncoder()
# New encoded columns
inputs['Fuel_Type_n'] = Numerics.fit_transform(inputs['Fuel_Type'])
inputs['Transmission_n'] = Numerics.fit_transform(inputs['Transmission'])
inputs
# Dropping string columns
inputs_n = inputs.drop(['Fuel_Type', 'Transmission','Selling_Price'], axis='columns')
inputs_n.head(6)
# Based on these 6 features we will make our predictions.
inputs_n.shape
# Linear Regression
model = linear_model.LinearRegression()
# Training
model.fit(inputs_n, target)
# prediction
pred = model.predict([[2014, 5.59, 27000, 0, 2, 1]])
print(pred)
###Output
[3.42552752]
###Markdown
We need to turn this large piece of code into a single file. In Python there is a library called ``pickle``, a well-known Python library; most data scientists use it to serialize their machine learning model into a single file.
###Code
# Import the library
import pickle
###Output
_____no_output_____
###Markdown
``pickle.dump()`` takes two arguments: the first is the machine learning model you have already created, and the second is the file to write to, which we name ``model.pkl``. When you run this code, you will see the model.pkl file in the Jupyter notebook directory. Now let's upload this file to the ``Flask server``.
###Code
pickle.dump(model, open('model.pkl', 'wb'))
###Output
_____no_output_____
###Markdown
Before that, we need to check how this model file works and how it predicts on data. Type:
> pickle.load(open('model.pkl', 'rb'))
This syntax will load the pickle file. Let's check how it predicts car prices. Here I passed six values, where each value represents a feature present in the dataset. You can see the predicted price of the car! Our machine learning model works perfectly. Let's upload this model to the ``Flask server``.
###Code
loaded_model = pickle.load(open('/content/drive/MyDrive/5_Scripts_in_Python_Eddy/End-to-End_ML_Project_deployment_on_Flask_server/model.pkl', 'rb'))
# prediction with the model we just loaded
print(loaded_model.predict([[2014, 5.59, 27000, 0, 2, 1]]))
print(loaded_model.predict([[2013, 9.54, 43000, 0, 1, 1]]))
###Output
[6.45188696]
|
Data-Lake/notebooks/2_spark_maps_and_lazy_evaluation.ipynb | ###Markdown
Maps

In Spark, maps take data as input and then transform that data with whatever function you put in the map. They are like directions for the data telling how each input should get to the output.

The first code cell creates a SparkContext object. With the SparkContext, you can input a dataset and parallelize the data across a cluster (since you are currently using Spark in local mode on a single machine, technically the dataset isn't distributed yet).

Run the code cell below to instantiate a SparkContext object and then read in the log_of_songs list into Spark.
###Code
###
# You might have noticed this code in the screencast.
#
# import findspark
# findspark.init('spark-2.3.2-bin-hadoop2.7')
#
# The findspark Python module makes it easier to install
# Spark in local mode on your computer. This is convenient
# for practicing Spark syntax locally.
# However, the workspaces already have Spark installed and you do not
# need to use the findspark module
#
###
import findspark
findspark.init()
import pyspark
sc = pyspark.SparkContext(appName="maps_and_lazy_evaluation_example")
log_of_songs = [
"Despacito",
"Nice for what",
"No tears left to cry",
"Despacito",
"Havana",
"In my feelings",
"Nice for what",
"despacito",
"All the stars"
]
# parallelize the log_of_songs to use with Spark
distributed_song_log = sc.parallelize(log_of_songs)
###Output
_____no_output_____
###Markdown
This next code cell defines a function that converts a song title to lowercase. Then there is an example converting the word "Havana" to "havana".
###Code
def convert_song_to_lowercase(song):
return song.lower()
convert_song_to_lowercase("Havana")
###Output
_____no_output_____
###Markdown
The following code cells demonstrate how to apply this function using a map step. The map step will go through each song in the list and apply the convert_song_to_lowercase() function.
###Code
distributed_song_log.map(convert_song_to_lowercase)
###Output
_____no_output_____
###Markdown
You'll notice that this code cell ran quite quickly. This is because of lazy evaluation. **Spark does not actually execute the map step unless it needs to**."RDD" in the output refers to resilient distributed dataset. RDDs are exactly what they say they are: fault-tolerant datasets distributed across a cluster. This is how Spark stores data. To get Spark to actually run the map step, you need to use an "action". One available action is the collect method. The collect() method takes the results from all of the clusters and "collects" them into a single list on the master node.
###Code
distributed_song_log.map(convert_song_to_lowercase).collect()
###Output
_____no_output_____
###Markdown
Note as well that Spark is not changing the original data set: Spark is merely making a copy. You can see this by running collect() on the original dataset.
###Code
distributed_song_log.collect()
###Output
_____no_output_____
###Markdown
You do not always have to write a custom function for the map step. You can also use anonymous (lambda) functions as well as built-in Python functions like string.lower(). Anonymous functions are actually a Python feature for writing functional style programs.
###Code
distributed_song_log.map(lambda song: song.lower()).collect()
distributed_song_log.map(lambda x: x.lower()).collect()
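# Optional illustration (not in the original notebook): transformations stay lazy
# until an action runs. countByValue() is an action, so it triggers the map and
# returns how many times each (lowercased) song title appears in the log.
play_counts = distributed_song_log.map(lambda song: song.lower()).countByValue()
print(dict(play_counts))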
###Output
_____no_output_____ |
003 - Machine Learing/.ipynb_checkpoints/LinearRegression_Diabetes-checkpoint.ipynb | ###Markdown
Comparing Actual vs. Predicted
###Code
df = pd.DataFrame({'Real': y_valid, 'Predito': predictions}).head(50)
df.plot(kind='bar',figsize=(20,8))
plt.grid(which='major', linestyle='-', linewidth='0.5', color='green')
plt.grid(which='minor', linestyle=':', linewidth='0.5', color='black')
plt.show()
###Output
_____no_output_____
###Markdown
Dimensionality Reduction for Visualization
###Code
from sklearn.decomposition import PCA
pca_diabetes = PCA(n_components=2)
principalComponents_diabetes = pca_diabetes.fit_transform(X_valid)
principal_diabetes_Df = pd.DataFrame(data = principalComponents_diabetes
, columns = ['principal component 1', 'principal component 2'])
principal_diabetes_Df['y'] = y_valid
principal_diabetes_Df['predicts'] = predictions
import seaborn as sns
plt.figure(figsize=(16,10))
sns.scatterplot(
x="principal component 1", y="principal component 2",
hue="y",
data=principal_diabetes_Df,
alpha=0.3
)
%matplotlib inline
# Plot outputs
plt.scatter(x="principal component 1", y="y", color="black", data=principal_diabetes_Df)
plt.scatter(x="principal component 1", y="predicts", color="green", data=principal_diabetes_Df)
plt.xticks(())
plt.yticks(())
plt.show()
%matplotlib inline
# Plot outputs
plt.scatter(x="principal component 2", y="y", color='black', linewidths=3, data=principal_diabetes_Df)
plt.scatter(x="principal component 2", y="predicts", color='blue', data=principal_diabetes_Df)
plt.xticks(())
plt.yticks(())
plt.show()
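# Optional check (not in the original notebook): quantify the fit shown above.
# Assumes y_valid and predictions come from the earlier cells of this notebook.
from sklearn.metrics import mean_squared_error, r2_score
print('MSE:', mean_squared_error(y_valid, predictions))
print('R2 :', r2_score(y_valid, predictions))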
###Output
_____no_output_____ |
ML_Ensemble_Learning.ipynb | ###Markdown
Ensemble Learning and Cross Validation

Key Terms: XGBoost, AdaBoost, KFold

**Kapil Nagwanshi**
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from pandas.plotting import scatter_matrix
%matplotlib inline
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = (18,10)
from sklearn import model_selection
from sklearn.ensemble import AdaBoostClassifier
from google.colab import drive
drive.mount('/content/gdrive')
cd gdrive
cd 'My Drive'
cd 'Colab Notebooks'
df = pd.read_csv('pima-indians-diabetes.csv')
df.head()
array = df.values
X = array[:,0:8]
Y = array[:,8]
seed =7
num_trees = 30
kfold = model_selection.KFold(n_splits=10,random_state=seed)
model = AdaBoostClassifier(n_estimators=num_trees,random_state=seed)
results = model_selection.cross_val_score(model,X,Y,cv=kfold)
print(results)
print(results.mean())
from sklearn import svm
from xgboost import XGBClassifier
clf = XGBClassifier()
seed =7
num_trees = 30
kfold = model_selection.KFold(n_splits=10,random_state=seed)
model = XGBClassifier(n_estimators=num_trees,random_state=seed)
results = model_selection.cross_val_score(model,X,Y,cv=kfold)
print(results)
print('---------------------')
print(results.mean())
###Output
/usr/local/lib/python3.6/dist-packages/sklearn/model_selection/_split.py:296: FutureWarning: Setting a random_state has no effect since shuffle is False. This will raise an error in 0.24. You should leave random_state to its default (None), or set shuffle=True.
FutureWarning
###Markdown
Cross Validation
###Code
from sklearn.datasets import load_iris
iris_data = load_iris()
print(iris_data)
data_input = iris_data.data
data_output = iris_data.target
print(data_input)
print('-------------------------------')
print(data_output)
from sklearn.model_selection import KFold
kf = KFold (n_splits =5, shuffle = True)
print('Train Set Test Set ')
for train_set, test_set in kf.split(data_input):
print(train_set,test_set)
from sklearn.ensemble import RandomForestClassifier
rf_class = RandomForestClassifier(n_estimators=10)
from sklearn.model_selection import cross_val_score
print(cross_val_score(rf_class,data_input, data_output, scoring='accuracy',cv=10))
accuracy = cross_val_score(rf_class,data_input, data_output, scoring='accuracy',cv=10).mean()*100
print('Accuracy of Random Forest is: ', accuracy, '%')
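# Optional extension (not in the original notebook): combine the classifiers used
# above into a simple hard-voting ensemble and evaluate it with the same
# cross-validation; the estimator settings are illustrative assumptions.
from sklearn.ensemble import VotingClassifier
voting_clf = VotingClassifier(estimators=[('rf', RandomForestClassifier(n_estimators=10)),
                                          ('ada', AdaBoostClassifier(n_estimators=30))],
                              voting='hard')
voting_accuracy = cross_val_score(voting_clf, data_input, data_output, scoring='accuracy', cv=10).mean()*100
print('Accuracy of the voting ensemble is: ', voting_accuracy, '%')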
###Output
_____no_output_____ |
nodes/equilibration.ipynb | ###Markdown
Author: Lester Hedges
Email: [email protected]

Equilibration

A node to perform equilibration of a molecular system.
###Code
import BioSimSpace as BSS
node = BSS.Gateway.Node("A node to perform equilibration and save the equlibrated molecular configuration to file.")
node.addAuthor(name="Lester Hedges", email="[email protected]", affiliation="University of Bristol")
node.setLicense("GPLv3")
###Output
_____no_output_____
###Markdown
Set the input requirements:
###Code
node.addInput("files", BSS.Gateway.FileSet(help="A set of molecular input files."))
node.addInput("runtime", BSS.Gateway.Time(help="The run time.",
unit="nanoseconds",
minimum=0*BSS.Units.Time.nanosecond,
maximum=10*BSS.Units.Time.nanosecond,
default=0.2*BSS.Units.Time.nanosecond))
node.addInput("temperature_start", BSS.Gateway.Temperature(help="The initial temperature.",
unit="kelvin",
minimum=0*BSS.Units.Temperature.kelvin,
maximum=1000*BSS.Units.Temperature.kelvin,
default=0*BSS.Units.Temperature.kelvin))
node.addInput("temperature_end", BSS.Gateway.Temperature(help="The final temperature.",
unit="kelvin",
minimum=0*BSS.Units.Temperature.kelvin,
maximum=1000*BSS.Units.Temperature.kelvin,
default=300*BSS.Units.Temperature.kelvin))
node.addInput("restraint", BSS.Gateway.String(help="The type of restraint to use.",
allowed=["None"] + BSS.Protocol.Equilibration.restraints(), default="None"))
###Output
_____no_output_____
###Markdown
We now need to define the output of the node. In this case we will return a set of files representing the equilibrated molecular system.
###Code
node.addOutput("equilibrated", BSS.Gateway.FileSet(help="The equilibrated molecular system."))
###Output
_____no_output_____
###Markdown
If needed, here are some input files again. These can then be re-uploaded using the GUI.

AMBER files: [ala.crd](../input/ala.crd), [ala.top](../input/ala.top)

GROMACS: [kigaki.gro](https://raw.githubusercontent.com/michellab/BioSimSpace/devel/demo/gromacs/kigaki/kigaki.gro), [kigaki.top](https://raw.githubusercontent.com/michellab/BioSimSpace/devel/demo/gromacs/kigaki/kigaki.top)

Now show the GUI.
###Code
node.showControls()
###Output
_____no_output_____
###Markdown
Generate the molecular system.
###Code
system = BSS.IO.readMolecules(node.getInput("files"))
###Output
_____no_output_____
###Markdown
Set up the equilibration protocol. (Note that the keyword arguments happen to have the same name as the input requirements. This need not be the case.)
###Code
protocol = BSS.Protocol.Equilibration(runtime=node.getInput("runtime"), temperature_start=node.getInput("temperature_start"), temperature_end=node.getInput("temperature_end"), restraint=node.getInput("restraint"))
###Output
_____no_output_____
###Markdown
Start the MD equilibration.
###Code
process = BSS.MD.run(system, protocol)
###Output
_____no_output_____
###Markdown
Get the equilibrated molecular system and write to file in the same format as the input.
###Code
node.setOutput("equilibrated", BSS.IO.saveMolecules("equilibrated", process.getSystem(block=True), system.fileFormat()))
###Output
_____no_output_____
###Markdown
Validate the node.
###Code
node.validate()
###Output
_____no_output_____ |
Chapter 06 Model Evaluation and Hyperparameter Tuning.ipynb | ###Markdown
Using K-fold cross validation to assess model performance

Holdout method

Split into different training and test sets. Interested in tuning and comparing parameters. Instead of tuning the model on the test set, we can make a training set, validation set, and test set. The training set is used to fit different models, performance on the validation set is used for model selection, and the final performance metric comes from the test set.

Splitting the training data into k folds reduces sampling bias. In k-fold cross validation we randomly split the training data into k folds without replacement, where k-1 folds are used for model training and one is used for testing. The procedure is repeated k times so that we obtain k models and performance estimates. Averaging the performance gives a better estimate of the model performance. After finding optimal hyperparameter values, we can retrain using the whole training set and get a final performance estimate using an independent test set. k = 10 is generally a good number, but with small training sets you will want to increase the number of folds. Leave-one-out cross validation can be used when working with very small datasets. Stratified cross validation can be a better technique: class proportions are preserved in each fold to ensure that each fold is representative of the training set.
###Code
import numpy as np
from sklearn.cross_validation import StratifiedKFold
kfold = StratifiedKFold(y=y_train, n_folds = 10, random_state = 1)
scores = []
for k, (train, test) in enumerate(kfold):
pipe_lr.fit(X_train[train], y_train[train])
score = pipe_lr.score(X_train[test], y_train[test])
scores.append(score)
print('Fold: %s, Class dist.: %s, Acc: %.3f' % (k+1, np.bincount(y_train[train]), score))
print('CV accuracy: %.3f +/- %.3f' % ( np.mean(scores), np.std(scores)))
### Uses Sklearn directly to fit and score
from sklearn.cross_validation import cross_val_score
scores = cross_val_score(estimator = pipe_lr,
X = X_train,
y = y_train,
cv = 10,
n_jobs = 1)
print ('CV accuracy scores: %s' % scores)
print('CV accuracy: %.3f +/- %.3f' % (np.mean(scores), np.std(scores)))
###Output
CV accuracy scores: [ 0.89130435 0.97826087 0.97826087 0.91304348 0.93478261 0.97777778
0.93333333 0.95555556 0.97777778 0.95555556]
CV accuracy: 0.950 +/- 0.029
###Markdown
Additional CV resources M. Markatou, H. Tian, S. Biswas, and G. M. Hripcsak. Analysis of Variance of Cross-validation Estimators of the Generalization Error. Journal of Machine Learning Research, 6:1127–1168, 2005)B. Efron andR. Tibshirani. Improvements on Cross-validation: The 632+ Bootstrap Method. Journal of the American Statistical Association, 92(438):548–560, 1997 Discovering bias and variance problems with learning curves
###Code
import matplotlib.pyplot as plt
from sklearn.learning_curve import learning_curve
pipe_lr = Pipeline([
('scl', StandardScaler()),
('clf', LogisticRegression(penalty='l2', random_state = 0))
])
train_sizes, train_scores, test_scores = learning_curve(estimator = pipe_lr,
X=X_train,
y=y_train,
train_sizes= np.linspace(0.1, 1.0, 10),
cv=10,
n_jobs=1)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis = 1)
plt.plot(train_sizes, train_mean, color = 'blue', marker='o',
markersize=5, label = 'training accuracy')
plt.fill_between(train_sizes, train_mean + train_std, train_mean - train_std,
alpha=0.15, color='blue')
plt.plot(train_sizes, test_mean, color = 'green', linestyle='--',
marker='s', markersize=5,
label='validation accuracy')
plt.fill_between(train_sizes,
test_mean + test_std,
test_mean - test_std,
alpha = 0.15, color = 'green')
plt.grid()
plt.xlabel('Number of training samples')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.ylim([0.8, 1.0])
plt.show()
###Output
/Users/andrew.moskowitz/anaconda2/lib/python2.7/site-packages/sklearn/learning_curve.py:22: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the functions are moved. This module will be removed in 0.20
DeprecationWarning)
###Markdown
Addressing overfitting and underfitting with validation curves
###Code
# Similar to learning curves, but focused on model parameters
# e.g., the inverse regularization parameter in logistic regression
from sklearn.model_selection import validation_curve
param_range = [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]
train_scores, test_scores = validation_curve(
estimator = pipe_lr,
X=X_train,
y = y_train,
param_name = 'clf__C',
param_range = param_range,
cv = 10)
train_mean = np.mean(train_scores, axis=1)
train_sd = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(param_range, train_mean, color ='blue', marker = 'o', markersize = 5, label = 'training accuracy')
plt.fill_between(param_range, train_mean + train_sd, train_mean - train_sd, alpha = 0.15, color = 'blue')
plt.plot(param_range, test_mean, color = 'green', linestyle='--', marker = 's', markersize = 5, label='validation accuracy')
plt.fill_between(param_range, test_mean + test_std, test_mean - test_std,
alpha=0.15, color = 'green')
plt.grid()
plt.xscale('log')
plt.legend(loc = 'lower right')
plt.xlabel('Parameter C')
plt.ylabel('Accuracy')
plt.ylim([0.8, 1.0])
plt.show()
###Output
_____no_output_____
###Markdown
Fine-Tuning using Grid Search

Grid search is a brute-force exhaustive search paradigm in which we test every combination of the specified hyperparameter values to find the best-performing model.
###Code
from sklearn.grid_search import GridSearchCV
from sklearn.svm import SVC
pipe_svc = Pipeline([('scl', StandardScaler()),
('clf', SVC(random_state=1))])
param_range = [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]
param_grid = [{'clf__C': param_range,
'clf__kernel' : ['linear']},
{'clf__C': param_range,
'clf__gamma': param_range,
'clf__kernel':['rbf']}]
gs = GridSearchCV(estimator = pipe_svc, param_grid=param_grid,
scoring='accuracy',
cv=10, n_jobs=-1)
gs = gs.fit(X_train, y_train)
print(gs.best_score_)
print gs.best_params_
## use best esimator on the test set
clf = gs.best_estimator_
clf.fit(X_train, y_train)
print('Test accuracy: %.3f' % clf.score(X_test, y_test))
###Output
Test accuracy: 0.965
###Markdown
Can and should (when the parameter space is high dimensional) use RandomizedSearchCV.

Algorithm selection using nested cross validation

Nested cross validation:
- the outer loop splits the data into training and test folds
- the inner loop selects the model using k-fold cross validation on the training fold
- after model selection, the test fold is used to evaluate model performance
###Code
gs = GridSearchCV(estimator = pipe_svc,
param_grid=param_grid,
scoring = 'accuracy',
cv = 2,
n_jobs = -1)
scores = cross_val_score(gs, X_train, y_train, scoring = 'accuracy', cv = 5)
print ('CV accuracy: %.3f +/- %.3f' %( np.mean(scores), np.std(scores)))
from sklearn.tree import DecisionTreeClassifier
gs = GridSearchCV(estimator = DecisionTreeClassifier(random_state=0),
param_grid=[
{'max_depth': [1,2,3,4,5,6,7,None]}
],
scoring='accuracy',
cv = 5)
scores = cross_val_score(gs,
X_train,
y_train,
scoring='accuracy',
cv = 2)
print('CV accuracy: %.3f +/- %.3f' % (
np.mean(scores), np.std(scores)))
###Output
CV accuracy: 0.906 +/- 0.015
###Markdown
Looking at different performance evaluation metrics

Accuracy is a good metric to evaluate models, but precision, recall, and F1-score are also used to measure a model's relevance. The confusion matrix looks like this:

                      Predicted P | Predicted N
    Actual class P  |  True Pos   |  False Neg
    Actual class N  |  False Pos  |  True Neg

Can use the confusion_matrix method to build the matrix.
###Code
from sklearn.metrics import confusion_matrix
pipe_svc.fit(X_train, y_train)
y_pred = pipe_svc.predict(X_test)
confmat = confusion_matrix(y_true = y_test, y_pred = y_pred)
print(confmat)
fig, ax = plt.subplots(figsize=(2.5, 2.5))
ax.matshow(confmat, cmap = plt.cm.Blues, alpha=0.3)
for i in range(confmat.shape[0]):
for j in range(confmat.shape[1]):
ax.text(x=j, y=i,
s=confmat[i, j],
va = 'center', ha = 'center')
plt.xlabel('predicted label')
plt.ylabel('true label')
plt.show()
###Output
_____no_output_____
###Markdown
Optimize precision and recall of a classification model

Both prediction error (ERR) and accuracy (ACC) are good to determine how many samples have been misclassified:

ERR = (FP + FN) / (FP + FN + TP + TN)   (falses over everything)
ACC = (TP + TN) / (FP + FN + TP + TN) = 1 - ERR   (trues over everything)

True positive rate and false positive rate (good for imbalanced class problems):

FPR = FP / N = FP / (FP + TN)
TPR = TP / P = TP / (FN + TP)

Precision (PRE) and Recall (REC) are related to FPR and TPR:

PRE = TP / (TP + FP)
REC = TPR = TP / P = TP / (FN + TP)

In practice a combination of precision and recall is used: the F1-score

F1 = 2 (PRE x REC) / (PRE + REC)

Can import all of these from scikit-learn and use any of them in the optimization for the grid search.
###Code
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score, f1_score
print('Precision: %.3f' % precision_score(y_true=y_test, y_pred = y_pred))
print('Recall: %.3f' % recall_score(y_true=y_test, y_pred=y_pred))
print('F1: %.3f' % f1_score(y_true=y_test, y_pred=y_pred))
### Can construct our own positive class label by creating custom
### scorer function
from sklearn.metrics import make_scorer, f1_score
scorer = make_scorer(f1_score, pos_label = 0)
gs = GridSearchCV(estimator=pipe_svc,
param_grid = param_grid,
scoring = scorer,
cv=10)
gs
###Output
_____no_output_____
###Markdown
Plotting the Receiver Operating Characteristic (ROC)

ROC curves plot performance with respect to the false positive and true positive rates. The diagonal of the ROC plot corresponds to random guessing, and models below the diagonal are worse than chance. A perfect classifier would fall into the top-left corner of the graph, with a true positive rate of 1 and a false positive rate of 0. Based on the ROC curve, we can compute the area under the curve (AUC). (Similar to ROC curves, we can compute precision-recall curves for different probability thresholds.)
###Code
from sklearn.metrics import roc_curve, auc
from scipy import interp
pipe_lr = Pipeline([('scl', StandardScaler()),
('pca', PCA(n_components=2)),
('clf', LogisticRegression(penalty='l2',
random_state=0,
C = 100.0))])
X_train2 = X_train[:, [4, 14]]
cv = StratifiedKFold(y_train, n_folds = 3, random_state=1)
fig = plt.figure(figsize=(7, 5))
mean_tpr = 0.0
mean_fpr = np.linspace(0, 1, 100)
all_tpr = []
for i, (train, test) in enumerate(cv):
probas = pipe_lr.fit(X_train2[train],
y_train[train]).predict_proba(X_train2[test])
fpr,tpr, thresholds = roc_curve(y_train[test],
probas[:, 1],
pos_label=1)
mean_tpr += interp(mean_fpr, fpr, tpr)
mean_tpr[0] = 0.0
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, lw=1, label = 'ROC fold %d (area = %0.2f)' % (i+1, roc_auc))
plt.plot([0,1], [0,1],
linestyle = '--',
color = (0.6, 0.6, 0.6),
label = 'random guessing')
mean_tpr /= len(cv)
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
plt.plot(mean_fpr, mean_tpr, 'k--',
label = 'mean ROC (area = %0.2f)' % mean_auc, lw=2)
plt.plot([0,0,1],
[0,1,1],
lw=2,
linestyle = ':',
color = 'black',
label = 'perfect performance')
plt.xlim([-.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.title('ROC')
plt.legend(loc = 'lower right')
plt.show()
pipe_lr = pipe_lr.fit(X_train2, y_train)
y_pred2 = pipe_lr.predict(X_test[:, [4,14]])
from sklearn.metrics import roc_auc_score
from sklearn.metrics import accuracy_score
print('ROC AUC: %.3f' % roc_auc_score(y_true=y_test, y_score=y_pred2))
print('Accuracy: %.3f' % accuracy_score(y_true = y_test, y_pred=y_pred2))
###Output
ROC AUC: 0.662
Accuracy: 0.711
###Markdown
Scoring Metrics for Multiclass Classification

Can use macro and micro averaging methods to extend scoring to multiclass problems via one-vs-all classification. Micro-averaging is calculated from the individual TPs, TNs, FPs, and FNs. The micro average of the precision score of a k-class system would be:

PRE_micro = (TP_1 + ... + TP_k) / (TP_1 + ... + TP_k + FP_1 + ... + FP_k)

The macro average is simply calculated as the average score of the individual systems:

PRE_macro = (PRE_1 + ... + PRE_k) / k

Micro-averaging is useful if we want to weight each instance or prediction equally, whereas macro averaging weights all classes equally to evaluate the overall performance of a classifier with regard to the most frequent class labels.
###Code
pre_scorer = make_scorer(score_func = precision_score,
pos_label = 1,
greater_is_better = True,
average='micro')
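# Optional illustration (not from the book): micro vs. macro averaging on a tiny,
# made-up multiclass example, so the difference between the two becomes concrete.
y_true_demo = [0, 0, 1, 1, 2, 2]
y_pred_demo = [0, 0, 1, 2, 2, 2]
print('micro-averaged precision:', precision_score(y_true_demo, y_pred_demo, average='micro'))
print('macro-averaged precision:', precision_score(y_true_demo, y_pred_demo, average='macro'))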
###Output
_____no_output_____ |
Projet/Projet.ipynb | ###Markdown
Project Idea 1: Minesweeper
###Code
import numpy as np
import random as rd
def initializationRealGrid(n,h,l,x,y): # n: number of bombs, h: grid height, l: grid width, (x, y): first cell clicked (kept free of bombs)
grid = np.zeros([h,l], dtype="int")
# place the bombs
i = 0
while i < n:
xb = rd.randint(0,l-1)
yb = rd.randint(0,h-1)
# do not place a bomb on, or next to, the first clicked cell (x, y);
# note: the original chained bitwise expression (== mixed with & and |) did not do this reliably because of operator precedence
if abs(xb - x) <= 1 and abs(yb - y) <= 1:
continue
elif grid[yb,xb] != 9:
grid[yb,xb]=9
i+=1
print(grid)
# write the neighbour counts
# first row
nb=0
if grid[0,0]!=9:
if grid[0,1]==9:
nb+=1
if grid[1,1]==9:
nb+=1
if grid[1,0]==9:
nb+=1
grid[0,0]=nb
nb=0
if grid[0,l-1]!=9:
if grid[0,l-2]==9:
nb+=1
if grid[1,l-2]==9:
nb+=1
if grid[1,l-1]==9:
nb+=1
grid[0,l-1]=nb
for j in range(1,l-1):
if grid[0,j]!=9:
nb=0
if grid[0,j-1]==9:
nb+=1
if grid[0,j+1]==9:
nb+=1
if grid[1,j-1]==9:
nb+=1
if grid[1,j]==9:
nb+=1
if grid[1,j+1]==9:
nb+=1
grid[0,j]=nb
# last row
nb=0
if grid[h-1,0]!=9:
if grid[h-1,1]==9:
nb+=1
if grid[h-2,1]==9:
nb+=1
if grid[h-2,0]==9:
nb+=1
grid[h-1,0]=nb
nb=0
if grid[h-1,l-1]!=9:
if grid[h-1,l-2]==9:
nb+=1
if grid[h-2,l-2]==9:
nb+=1
if grid[h-2,l-1]==9:
nb+=1
grid[h-1,l-1]=nb
for j in range(1,l-1):
if grid[h-1,j]!=9:
nb=0
if grid[h-1,j-1]==9:
nb+=1
if grid[h-1,j+1]==9:
nb+=1
if grid[h-2,j-1]==9:
nb+=1
if grid[h-2,j]==9:
nb+=1
if grid[h-2,j+1]==9:
nb+=1
grid[h-1,j]=nb
# other rows
for i in range (1,h-1):
nb=0
if grid[i,0]!=9:
if grid[i-1,0]==9:
nb+=1
if grid[i-1,1]==9:
nb+=1
if grid[i,1]==9:
nb+=1
if grid[i+1,1]==9:
nb+=1
if grid[i+1,0]==9:
nb+=1
grid[i,0]=nb
nb=0
if grid[i,l-1]!=9:
if grid[i-1,l-1]==9:
nb+=1
if grid[i-1,l-2]==9:
nb+=1
if grid[i,l-2]==9:
nb+=1
if grid[i+1,l-2]==9:
nb+=1
if grid[i+1,l-1]==9:
nb+=1
grid[i,l-1]=nb
for j in range(1,l-1):
nb=0
if grid[i,j] !=9:
if grid[i-1,j-1]==9:
nb+=1
if grid[i-1,j]==9:
nb+=1
if grid[i-1,j+1]==9:
nb+=1
if grid[i,j-1]==9:
nb+=1
if grid[i,j+1]==9:
nb+=1
if grid[i+1,j-1]==9:
nb+=1
if grid[i+1,j]==9:
nb+=1
if grid[i+1,j+1]==9:
nb+=1
grid[i,j]=nb
return grid
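# --- Optional, more compact sketch (not part of the original notebook) ---
# The long per-edge/per-corner blocks above can be replaced by a single helper
# that counts neighbouring bombs while checking the grid bounds.
def count_neighbouring_bombs(grid, i, j):
    h, l = grid.shape
    count = 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < l and grid[ni, nj] == 9:
                count += 1
    return count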
def initializationGamerGrid(h,l):
# build the grid shown to the player: every cell starts hidden
grid = [["_|" for j in range (l)] for i in range(h)]
return grid
print(initializationGamerGrid(2,3))
# scratch test: np.array([2,3]) creates a 1-D int array, so build a 2 x 3 string array instead
grid = np.empty((2, 3), dtype='<U1')
for i in range(2):
for j in range(3):
grid[i,j]='_'
###Output
_____no_output_____ |
xinetzone/docs/topic/vta/tutorials/vta_get_started.ipynb | ###Markdown
(vta-get-started)=

Get Started with VTA

**Original author**: [Thierry Moreau](https://homes.cs.washington.edu/~moreau/)

This is an introduction tutorial on how to use TVM to program the VTA design. In this tutorial, we will demonstrate the basic TVM workflow to implement a vector addition on the vector ALU of the VTA design. This process includes the specific scheduling transformations necessary to lower the computation down to low-level accelerator operations.

First, we need to import TVM, the deep learning optimizing compiler. We also need to import the VTA python package, which contains VTA-specific extensions for TVM to target the VTA design.
###Code
import os
import tvm
from tvm import te
import vta
import numpy as np
###Output
_____no_output_____
###Markdown
Loading in VTA Parameters

VTA is a modular and customizable design. Consequently, the user is free to modify high-level hardware parameters that affect the hardware design layout. These parameters are specified in the `tvm/3rdparty/vta-hw/config/vta_config.json` file by their `log2` values, and can be loaded with the `vta.get_env` function.

Finally, the TVM target is also specified in the `vta_config.json` file. When set to `sim`, execution will take place inside the behavioral VTA simulator. If you want to run this tutorial on the Pynq FPGA development platform, follow the *VTA Pynq-Based Test Setup* guide.
###Code
env = vta.get_env()
###Output
_____no_output_____
###Markdown
FPGA Programming

When targeting the Pynq FPGA development board, we need to configure the board with a VTA bitstream. The TVM RPC module and the VTA simulator module are needed:
###Code
from tvm import rpc
from tvm.contrib import utils
from vta.testing import simulator # 此处一定要有
###Output
_____no_output_____
###Markdown
```{warning}
If VTA is in `sim` mode, make sure the `simulator` module is imported, otherwise an exception will be raised.
```

We read the Pynq RPC host IP address and port number from the OS environment:
###Code
host = os.environ.get("VTA_RPC_HOST", "192.168.2.99")
port = int(os.environ.get("VTA_RPC_PORT", "9091"))
###Output
_____no_output_____
###Markdown
Configure the bitstream and runtime system on the Pynq to match the VTA configuration specified by the `vta_config.json` file.
###Code
if env.TARGET in ["pynq", "de10nano"]:
# Make sure that TVM was compiled with RPC=1
assert tvm.runtime.enabled("rpc")
remote = rpc.connect(host, port)
# Reconfigure the JIT runtime
vta.reconfig_runtime(remote)
# Program the FPGA with a pre-compiled VTA bitstream.
# You can program the FPGA with your own custom bitstream
# by passing the path to the bitstream file instead of None.
vta.program_fpga(remote, bitstream=None)
# In simulation mode, host the RPC server locally.
elif env.TARGET in ("sim", "tsim", "intelfocl"):
remote = rpc.LocalSession()
if env.TARGET in ["intelfocl"]:
# program intelfocl aocx
vta.program_fpga(remote, bitstream="vta.bitstream")
###Output
_____no_output_____
###Markdown
Computation Declaration

As a first step, we need to describe our computation. TVM adopts tensor semantics, with each intermediate result represented as a multi-dimensional array. The user needs to describe the computation rule that generates the output tensors. In this example we describe a vector addition, which requires multiple computation stages, as shown in the dataflow diagram below.

- First, we describe the input tensors `A` and `B` that live in main memory.
- Second, we need to declare intermediate tensors `A_buf` and `B_buf`, which will live in VTA's on-chip buffers. With this extra computational stage we can explicitly stage cached reads and writes.
- Third, we describe the vector addition computation which adds `A_buf` to `B_buf` to produce `C_buf`.
- The last operation is a cast and a copy back to DRAM, into the result tensor `C`.

```{image} images/vadd_dataflow.png
:align: center
```

Input Placeholders

We describe the placeholder tensors `A` and `B` in a tiled data format to match the data layout requirements imposed by the VTA vector ALU. For VTA's general-purpose operations such as vector addition, the tile size is `(env.BATCH, env.BLOCK_OUT)`. The dimensions are specified in the `vta_config.json` configuration file and are set by default to a (1, 16) vector.
###Code
# Output channel factor m - total 64 x 16 = 1024 output channels
m = 64
# Batch factor o - total 1 x 1 = 1
o = 1
# VTA vector data shape
shape = (o, m, env.BATCH, env.BLOCK_OUT)
###Output
_____no_output_____
###Markdown
Inspect `shape`:
###Code
shape
###Output
_____no_output_____
###Markdown
Inspect {data}`env.acc_dtype` and {data}`env.inp_dtype`:
###Code
env.acc_dtype, env.inp_dtype
###Output
_____no_output_____
###Markdown
In addition, the data types of A and B also need to match `env.acc_dtype`, which is set to a 32-bit integer by the `vta_config.json` file.
###Code
# A placeholder tensor in tiled data format
A = te.placeholder(shape, name="A", dtype=env.acc_dtype)
# B placeholder tensor in tiled data format
B = te.placeholder(shape, name="B", dtype=env.acc_dtype)
###Output
_____no_output_____
###Markdown
Inspect tensor `A`:
###Code
A
###Output
_____no_output_____
###Markdown
Copy Buffers

One specificity of hardware accelerators is that on-chip memory has to be explicitly managed. This means that we need to describe intermediate tensors `A_buf` and `B_buf` that can have a different memory scope than the original placeholder tensors `A` and `B`.

Later, in the scheduling phase, we can tell the compiler that `A_buf` and `B_buf` will live in the VTA's on-chip buffers (SRAM), while `A` and `B` will live in main memory (DRAM). We describe A_buf and B_buf as the results of a compute operation that is the identity function. This can later be interpreted by the compiler as a cached read operation.
###Code
# A copy buffer
A_buf = te.compute(shape, lambda *i: A(*i), "A_buf")
# B copy buffer
B_buf = te.compute(shape, lambda *i: B(*i), "B_buf")
A_buf
###Output
_____no_output_____
###Markdown
Vector Addition

Now we're ready to describe the vector addition result tensor `C`, with another `compute` operation. The `compute` function takes the shape of the tensor, as well as a lambda function that describes the computation rule for each position of the tensor.

No computation happens during this phase, as we are only declaring how the computation should be done.
###Code
# Describe the in-VTA vector addition
fcompute = lambda *i: A_buf(*i).astype(env.acc_dtype) + B_buf(*i).astype(env.acc_dtype)
C_buf = te.compute(shape, fcompute, name="C_buf")
###Output
_____no_output_____
###Markdown
Casting the Results

After the computation is done, we'll need to send the results computed by VTA back to main memory.

```{admonition} Memory Store Restrictions
:class: alert alert-info
One specificity of VTA is that it only supports DRAM stores in the narrow `env.inp_dtype` data type format. This lets us reduce the data footprint of memory transfers (more on this in the basic matrix multiply example).
```

We perform one last typecast operation to the narrow input activation data format.
###Code
# Cast to the output type, and send to main memory
fcompute = lambda *i: C_buf(*i).astype(env.inp_dtype)
C = te.compute(shape, fcompute, name="C")
###Output
_____no_output_____
###Markdown
This concludes the computation declaration part of this tutorial.

Scheduling the Computation

While the lines above describe the computation rule, we can obtain `C` in many ways. TVM asks the user to provide an implementation of the computation called a **schedule**. A schedule is a set of transformations to the original computation that changes the implementation without affecting correctness. This simple VTA programming tutorial aims to demonstrate the basic schedule transformations that map the original schedule down to VTA hardware primitives.

Default Schedule

After constructing the schedule, by default the schedule computes `C` in the following way:
###Code
s = te.create_schedule(C.op)
# Look at the generated schedule
print(tvm.lower(s, [A, B, C], simple_mode=True))
###Output
@main = primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
attr = {"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True}
buffers = {A: Buffer(A_2: Pointer(int32), int32, [1024], []),
B: Buffer(B_2: Pointer(int32), int32, [1024], []),
C: Buffer(C_2: Pointer(int8), int8, [1024], [])}
buffer_map = {A_1: A, B_1: B, C_1: C}
preflattened_buffer_map = {A_1: A_3: Buffer(A_2, int32, [1, 64, 1, 16], []), B_1: B_3: Buffer(B_2, int32, [1, 64, 1, 16], []), C_1: C_3: Buffer(C_2, int8, [1, 64, 1, 16], [])} {
allocate(A_buf: Pointer(global int32), int32, [1024]), storage_scope = global;
allocate(B_buf: Pointer(global int32), int32, [1024]), storage_scope = global {
for (i1: int32, 0, 64) {
for (i3: int32, 0, 16) {
let cse_var_1: int32 = ((i1*16) + i3)
A_buf_1: Buffer(A_buf, int32, [1024], [])[cse_var_1] = A[cse_var_1]
}
}
for (i1_1: int32, 0, 64) {
for (i3_1: int32, 0, 16) {
let cse_var_2: int32 = ((i1_1*16) + i3_1)
B_buf_1: Buffer(B_buf, int32, [1024], [])[cse_var_2] = B[cse_var_2]
}
}
for (i1_2: int32, 0, 64) {
for (i3_2: int32, 0, 16) {
let cse_var_3: int32 = ((i1_2*16) + i3_2)
A_buf_2: Buffer(A_buf, int32, [1024], [])[cse_var_3] = (A_buf_1[cse_var_3] + B_buf_1[cse_var_3])
}
}
for (i1_3: int32, 0, 64) {
for (i3_3: int32, 0, 16) {
let cse_var_4: int32 = ((i1_3*16) + i3_3)
C[cse_var_4] = cast(int8, A_buf_2[cse_var_4])
}
}
}
}
###Markdown
While this schedule is legal, it won't compile to VTA. In order to obtain correct code generation, we need to apply scheduling primitives and code annotations that will transform the schedule into one that can be directly lowered to VTA hardware intrinsics. These include:- DMA copy operations, which will take globally-scoped tensors and copy them into locally-scoped tensors.- Vector ALU operations that will perform the vector addition. Buffer scopes First, we set the scope of the copy buffers to indicate to TVM that these intermediate tensors will be stored in VTA's on-chip SRAM buffers. Below, we tell TVM that `A_buf`, `B_buf`, and `C_buf` will live in VTA's on-chip *accumulator buffer*, which serves as VTA's general-purpose register file. Set the intermediate tensors' scope to VTA's on-chip accumulator buffer
###Code
s[A_buf].set_scope(env.acc_scope)
s[B_buf].set_scope(env.acc_scope)
s[C_buf].set_scope(env.acc_scope)
###Output
_____no_output_____
###Markdown
DMA transfers We need to schedule DMA transfers to move data stored in DRAM to and from the VTA on-chip buffers. We insert `dma_copy` pragmas to tell the compiler that the copy operations will be performed in bulk via DMA, which is common in hardware accelerators. Tag the buffer copies with the DMA pragma to map the copy loops to DMA transfer operations:
###Code
s[A_buf].pragma(s[A_buf].op.axis[0], env.dma_copy)
s[B_buf].pragma(s[B_buf].op.axis[0], env.dma_copy)
s[C].pragma(s[C].op.axis[0], env.dma_copy)
###Output
_____no_output_____
###Markdown
ALU operations VTA has a vector ALU that can perform vector operations on tensors in the accumulator buffer. In order to tell TVM that a given operation needs to be mapped to VTA's vector ALU, we need to explicitly tag the vector addition loop with an `env.alu` pragma. Tell TVM that the computation needs to be performed on VTA's vector ALU:
###Code
s[C_buf].pragma(C_buf.op.axis[0], env.alu)
# Inspect the final schedule
print(vta.lower(s, [A, B, C], simple_mode=True))
###Output
@main = primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
attr = {"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True}
buffers = {A: Buffer(A_2: Pointer(int32), int32, [1024], []),
B: Buffer(B_2: Pointer(int32), int32, [1024], []),
C: Buffer(C_2: Pointer(int8), int8, [1024], [])}
buffer_map = {A_1: A, B_1: B, C_1: C}
preflattened_buffer_map = {A_1: A_3: Buffer(A_2, int32, [1, 64, 1, 16], []), B_1: B_3: Buffer(B_2, int32, [1, 64, 1, 16], []), C_1: C_3: Buffer(C_2, int8, [1, 64, 1, 16], [])} {
attr [IterVar(vta: int32, (nullptr), "ThreadIndex", "vta")] "coproc_scope" = 2 {
@tir.call_extern("VTALoadBuffer2D", @tir.tvm_thread_context(@tir.vta.command_handle(, dtype=handle), dtype=handle), A_2, 0, 64, 1, 64, 0, 0, 0, 0, 0, 3, dtype=int32)
@tir.call_extern("VTALoadBuffer2D", @tir.tvm_thread_context(@tir.vta.command_handle(, dtype=handle), dtype=handle), B_2, 0, 64, 1, 64, 0, 0, 0, 0, 64, 3, dtype=int32)
attr [IterVar(vta, (nullptr), "ThreadIndex", "vta")] "coproc_uop_scope" = "VTAPushALUOp" {
@tir.call_extern("VTAUopLoopBegin", 64, 1, 1, 0, dtype=int32)
@tir.vta.uop_push(1, 0, 0, 64, 0, 2, 0, 0, dtype=int32)
@tir.call_extern("VTAUopLoopEnd", dtype=int32)
}
@tir.vta.coproc_dep_push(2, 3, dtype=int32)
}
attr [IterVar(vta, (nullptr), "ThreadIndex", "vta")] "coproc_scope" = 3 {
@tir.vta.coproc_dep_pop(2, 3, dtype=int32)
@tir.call_extern("VTAStoreBuffer2D", @tir.tvm_thread_context(@tir.vta.command_handle(, dtype=handle), dtype=handle), 0, 4, C_2, 0, 64, 1, 64, dtype=int32)
}
@tir.vta.coproc_sync(, dtype=int32)
}
###Markdown
This concludes the scheduling portion of this tutorial. TVM compilation After we have finished specifying the schedule, we can compile it into a TVM function. By default TVM compiles into a type-erased function that can be directly called from Python. In the following line, we use {func}`tvm.build` to create the function. The `build` function takes the schedule, the desired signature of the function (including the inputs and outputs), and the target language we want to compile to.
###Code
env.target_host
# ctx = tvm.target.Target("ext_dev", host=env.target_host)
target = "ext_dev"
my_vadd = vta.build(s, [A, B, C], target=target, name="my_vadd")
###Output
_____no_output_____
###Markdown
Saving the module TVM lets us save our module into a file so it can be loaded back later. This is called ahead-of-time compilation and saves us some compilation time. More importantly, this allows us to cross-compile the executable on our development machine and send it over to the Pynq FPGA board over RPC for execution. Write the compiled module into an object file.
###Code
temp = utils.tempdir()
my_vadd.save(temp.relpath("vadd.o"))
###Output
_____no_output_____
###Markdown
Send the executable over RPC:
###Code
remote.upload(temp.relpath("vadd.o"))
###Output
_____no_output_____
###Markdown
Loading the module We can load the compiled module from the file system to run the code.
###Code
f = remote.load_module("vadd.o")
###Output
_____no_output_____
###Markdown
Running the function The compiled TVM function exposes a concise C API and can be invoked from any language. TVM provides an array API in Python to aid quick testing and prototyping. The array API is based on the [DLPack](https://github.com/dmlc/dlpack) standard.- First we create a remote context (for remote execution on the Pynq).- Then `tvm.nd.array` formats the data accordingly.- `f()` runs the actual computation.- `numpy()` copies the result array back in a format that can be interpreted. Randomly initialize the A and B arrays with integers in the range $(-128, 128]$:
###Code
size = o * env.BATCH, m * env.BLOCK_OUT
A_orig = np.random.randint(-128, 128, size=size).astype(A.dtype)
B_orig = np.random.randint(-128, 128, size=size).astype(B.dtype)
###Output
_____no_output_____
###Markdown
Apply packing to the A and B arrays, from a 2D to a 4D packed layout:
###Code
A_packed = A_orig.reshape(o, env.BATCH, m, env.BLOCK_OUT).transpose((0, 2, 1, 3))
B_packed = B_orig.reshape(o, env.BATCH, m, env.BLOCK_OUT).transpose((0, 2, 1, 3))
###Output
_____no_output_____
###Markdown
Get the remote device context:
###Code
ctx = remote.ext_dev(0)
###Output
_____no_output_____
###Markdown
Format the input/output arrays to the DLPack standard using {func}`tvm.nd.array`:
###Code
A_nd = tvm.nd.array(A_packed, ctx)
B_nd = tvm.nd.array(B_packed, ctx)
C_nd = tvm.nd.array(np.zeros((o, m, env.BATCH, env.BLOCK_OUT)).astype(C.dtype), ctx)
###Output
_____no_output_____
###Markdown
Invoke the module to perform the computation:
###Code
f(A_nd, B_nd, C_nd)
###Output
_____no_output_____
###Markdown
Verifying correctness Compute the reference result with `numpy`, and assert that the output of the computation is indeed correct:
###Code
C_ref = (A_orig.astype(env.acc_dtype) + B_orig.astype(env.acc_dtype)).astype(C.dtype)
C_ref = C_ref.reshape(o, env.BATCH, m, env.BLOCK_OUT).transpose((0, 2, 1, 3))
np.testing.assert_equal(C_ref, C_nd.numpy())
print("Successful vector add test!")
###Output
Successful vector add test!
###Markdown
(vta-get-started)= Get Started with VTA **Original author**: [Thierry Moreau](https://homes.cs.washington.edu/~moreau/) This is an introduction tutorial on how to use TVM to program the VTA design. In this tutorial, we will demonstrate the basic TVM workflow to implement a vector addition on the vector ALU of the VTA design. This process includes the specific scheduling transformations necessary to lower the computation to low-level accelerator operations. First, we need to import TVM, our deep learning optimizing compiler. We also need to import the VTA Python package, which contains VTA-specific extensions of TVM to target the VTA design.
###Code
import os
import tvm
from tvm import te
import vta
import numpy as np
###Output
_____no_output_____
###Markdown
Loading in VTA parameters VTA is a modular and customizable design. Consequently, the user is free to modify the high-level hardware parameters that affect the hardware design layout. These parameters are specified in `tvm/3rdparty/vta-hw/config/vta_config.json` by their `log2` values. These VTA parameters can be loaded with the `vta.get_env` function. Finally, the TVM target is also specified in the `vta_config.json` file. When set to *sim*, execution will take place inside the behavioral VTA simulator. If you want to run this tutorial on the Pynq FPGA development platform, follow the *VTA Pynq-Based Test Setup* guide.
###Code
env = vta.get_env()
###Output
_____no_output_____
###Markdown
FPGA programming When targeting the Pynq FPGA development board, we need to configure the board with a VTA bitstream. We need the TVM RPC module and the VTA simulator module:
###Code
from tvm import rpc
from tvm.contrib import utils
from vta.testing import simulator  # this import is required here
###Output
_____no_output_____
###Markdown
```{warning}If VTA is in `sim` mode, be sure to import the `simulator` module, otherwise an exception will be raised.```Read the Pynq RPC host IP address and port number from the OS environment:
###Code
host = os.environ.get("VTA_RPC_HOST", "192.168.2.99")
port = int(os.environ.get("VTA_RPC_PORT", "9091"))
###Output
_____no_output_____
###Markdown
Configure the bitstream and runtime system on the Pynq to match the VTA configuration specified by the `vta_config.json` file.
###Code
if env.TARGET == "pynq" or env.TARGET == "de10nano":
# Make sure that TVM was compiled with RPC=1
assert tvm.runtime.enabled("rpc")
remote = rpc.connect(host, port)
# Reconfigure the JIT runtime
vta.reconfig_runtime(remote)
# Program the FPGA with a pre-compiled VTA bitstream.
# You can program the FPGA with your own custom bitstream
# by passing the path to the bitstream file instead of None.
vta.program_fpga(remote, bitstream=None)
# In simulation mode, host the RPC server locally.
elif env.TARGET in ("sim", "tsim", "intelfocl"):
remote = rpc.LocalSession()
if env.TARGET in ["intelfocl"]:
# program intelfocl aocx
vta.program_fpga(remote, bitstream="vta.bitstream")
###Output
_____no_output_____
###Markdown
Computation declaration As a first step, we need to describe our computation. TVM adopts tensor semantics, with each intermediate result represented as a multi-dimensional array. The user needs to describe the computation rule that generates the output tensors. In this example we describe a vector addition, which requires multiple computation stages, as shown in the dataflow diagram below.- First, we describe the input tensors `A` and `B`, which live in main memory.- Second, we need to declare intermediate tensors `A_buf` and `B_buf`, which will live in VTA's on-chip buffers. Having this extra computation stage allows us to explicitly stage cached reads and writes.- Third, we describe the vector addition operation that adds `A_buf` to `B_buf` to produce `C_buf`.- The last operation is a cast and a copy back to DRAM, into the result tensor `C`.```{image} images/vadd_dataflow.png:align: center``` Input placeholders We describe the placeholder tensors `A` and `B` in a tiled data format to match the data layout requirements imposed by VTA's vector ALU. For VTA's general-purpose operations such as vector addition, the tile size is `(env.BATCH, env.BLOCK_OUT)`. The dimensions are specified in the `vta_config.json` configuration file and are set by default to a (1, 16) vector.
###Code
# Output channel factor m - total 64 x 16 = 1024 output channels
m = 64
# Batch factor o - total 1 x 1 = 1
o = 1
# VTA vector data shape
shape = (o, m, env.BATCH, env.BLOCK_OUT)
###Output
_____no_output_____
###Markdown
In addition, the data types of A and B need to match `env.acc_dtype`, which is set to a 32-bit integer by the `vta_config.json` file.
###Code
# A placeholder tensor in tiled data format
A = te.placeholder(shape, name="A", dtype=env.acc_dtype)
# B placeholder tensor in tiled data format
B = te.placeholder(shape, name="B", dtype=env.acc_dtype)
###Output
_____no_output_____
###Markdown
Copy buffers One of the characteristics of hardware accelerators is that on-chip memory has to be managed explicitly. This means that we need to describe the intermediate tensors `A_buf` and `B_buf`, which can have a different memory scope than the original placeholder tensors `A` and `B`. Later, in the scheduling phase, we can tell the compiler that `A_buf` and `B_buf` will live in VTA's on-chip buffers (SRAM), while `A` and `B` will live in main memory (DRAM). We describe A_buf and B_buf as the results of a computation that is the identity function. This can later be interpreted by the compiler as cached read operations.
###Code
# A copy buffer
A_buf = te.compute(shape, lambda *i: A(*i), "A_buf")
# B copy buffer
B_buf = te.compute(shape, lambda *i: B(*i), "B_buf")
###Output
_____no_output_____
###Markdown
Vector addition Now we can describe the vector addition result tensor `C` with another `compute` operation. The `compute` function takes the shape of the tensor, as well as a lambda function that describes the computation rule for each position of the tensor. No computation happens during this phase, as we are only declaring how the computation should be done.
###Code
# Describe the in-VTA vector addition
fcompute = lambda *i: A_buf(*i).astype(env.acc_dtype) + B_buf(*i).astype(env.acc_dtype)
C_buf = te.compute(shape, fcompute, name="C_buf")
###Output
_____no_output_____
###Markdown
Casting the results After the computation is done, we need to send the results computed by VTA back to main memory.```{admonition} Memory store restriction:class: alert alert-infoOne of the characteristics of VTA is that it only supports DRAM stores in the narrow `env.inp_dtype` data type format. This lets us reduce the data footprint of memory transfers (see the basic matrix multiply example for more details).```We perform one last typecast operation to the narrow input activation data format.
###Code
# Cast to output type, and send to main memory
fcompute = lambda *i: C_buf(*i).astype(env.inp_dtype)
C = te.compute(shape, fcompute, name="C")
###Output
_____no_output_____
###Markdown
This concludes the computation declaration part of this tutorial. Scheduling the computation While the lines above describe the computation rule, we can obtain `C` in many ways. TVM asks the user to provide an implementation of the computation called a **schedule**. A schedule is a set of transformations to the original computation that transforms the implementation of the computation without affecting correctness. This simple VTA programming tutorial aims to demonstrate basic schedule transformations that will map the original schedule to VTA hardware primitives. Default schedule After constructing the schedule, by default it computes `C` in the following way:
###Code
s = te.create_schedule(C.op)
# Inspect the generated schedule
print(tvm.lower(s, [A, B, C], simple_mode=True))
###Output
@main = primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
attr = {"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True}
buffers = {A: Buffer(A_2: Pointer(int32), int32, [1024], []),
B: Buffer(B_2: Pointer(int32), int32, [1024], []),
C: Buffer(C_2: Pointer(int8), int8, [1024], [])}
buffer_map = {A_1: A, B_1: B, C_1: C}
preflattened_buffer_map = {A_1: A_3: Buffer(A_2, int32, [1, 64, 1, 16], []), B_1: B_3: Buffer(B_2, int32, [1, 64, 1, 16], []), C_1: C_3: Buffer(C_2, int8, [1, 64, 1, 16], [])} {
allocate(A_buf: Pointer(global int32), int32, [1024]), storage_scope = global;
allocate(B_buf: Pointer(global int32), int32, [1024]), storage_scope = global {
for (i1: int32, 0, 64) {
for (i3: int32, 0, 16) {
let cse_var_1: int32 = ((i1*16) + i3)
A_buf_1: Buffer(A_buf, int32, [1024], [])[cse_var_1] = A[cse_var_1]
}
}
for (i1_1: int32, 0, 64) {
for (i3_1: int32, 0, 16) {
let cse_var_2: int32 = ((i1_1*16) + i3_1)
B_buf_1: Buffer(B_buf, int32, [1024], [])[cse_var_2] = B[cse_var_2]
}
}
for (i1_2: int32, 0, 64) {
for (i3_2: int32, 0, 16) {
let cse_var_3: int32 = ((i1_2*16) + i3_2)
A_buf_2: Buffer(A_buf, int32, [1024], [])[cse_var_3] = (A_buf_1[cse_var_3] + B_buf_1[cse_var_3])
}
}
for (i1_3: int32, 0, 64) {
for (i3_3: int32, 0, 16) {
let cse_var_4: int32 = ((i1_3*16) + i3_3)
C[cse_var_4] = cast(int8, A_buf_2[cse_var_4])
}
}
}
}
###Markdown
While this schedule is legal, it won't compile to VTA. In order to obtain correct code generation, we need to apply scheduling primitives and code annotations that will transform the schedule into one that can be directly lowered to VTA hardware intrinsics. These include:- DMA copy operations, which will take globally-scoped tensors and copy them into locally-scoped tensors.- Vector ALU operations that will perform the vector addition. Buffer scopes First, we set the scope of the copy buffers to indicate to TVM that these intermediate tensors will be stored in VTA's on-chip SRAM buffers. Below, we tell TVM that `A_buf`, `B_buf`, and `C_buf` will live in VTA's on-chip *accumulator buffer*, which serves as VTA's general-purpose register file. Set the intermediate tensors' scope to VTA's on-chip accumulator buffer
###Code
s[A_buf].set_scope(env.acc_scope)
s[B_buf].set_scope(env.acc_scope)
s[C_buf].set_scope(env.acc_scope)
###Output
_____no_output_____
###Markdown
DMA transfers We need to schedule DMA transfers to move data stored in DRAM to and from the VTA on-chip buffers. We insert `dma_copy` pragmas to tell the compiler that the copy operations will be performed in bulk via DMA, which is common in hardware accelerators. Tag the buffer copies with the DMA pragma to map the copy loops to DMA transfer operations:
###Code
s[A_buf].pragma(s[A_buf].op.axis[0], env.dma_copy)
s[B_buf].pragma(s[B_buf].op.axis[0], env.dma_copy)
s[C].pragma(s[C].op.axis[0], env.dma_copy)
###Output
_____no_output_____
###Markdown
ALU operations VTA has a vector ALU that can perform vector operations on tensors in the accumulator buffer. In order to tell TVM that a given operation needs to be mapped to VTA's vector ALU, we need to explicitly tag the vector addition loop with an `env.alu` pragma. Tell TVM that the computation needs to be performed on VTA's vector ALU:
###Code
s[C_buf].pragma(C_buf.op.axis[0], env.alu)
# Inspect the final schedule
print(vta.lower(s, [A, B, C], simple_mode=True))
###Output
@main = primfn(A_1: handle, B_1: handle, C_1: handle) -> ()
attr = {"from_legacy_te_schedule": True, "global_symbol": "main", "tir.noalias": True}
buffers = {A: Buffer(A_2: Pointer(int32), int32, [1024], []),
B: Buffer(B_2: Pointer(int32), int32, [1024], []),
C: Buffer(C_2: Pointer(int8), int8, [1024], [])}
buffer_map = {A_1: A, B_1: B, C_1: C}
preflattened_buffer_map = {A_1: A_3: Buffer(A_2, int32, [1, 64, 1, 16], []), B_1: B_3: Buffer(B_2, int32, [1, 64, 1, 16], []), C_1: C_3: Buffer(C_2, int8, [1, 64, 1, 16], [])} {
attr [IterVar(vta: int32, (nullptr), "ThreadIndex", "vta")] "coproc_scope" = 2 {
@tir.call_extern("VTALoadBuffer2D", @tir.tvm_thread_context(@tir.vta.command_handle(, dtype=handle), dtype=handle), A_2, 0, 64, 1, 64, 0, 0, 0, 0, 0, 3, dtype=int32)
@tir.call_extern("VTALoadBuffer2D", @tir.tvm_thread_context(@tir.vta.command_handle(, dtype=handle), dtype=handle), B_2, 0, 64, 1, 64, 0, 0, 0, 0, 64, 3, dtype=int32)
attr [IterVar(vta, (nullptr), "ThreadIndex", "vta")] "coproc_uop_scope" = "VTAPushALUOp" {
@tir.call_extern("VTAUopLoopBegin", 64, 1, 1, 0, dtype=int32)
@tir.vta.uop_push(1, 0, 0, 64, 0, 2, 0, 0, dtype=int32)
@tir.call_extern("VTAUopLoopEnd", dtype=int32)
}
@tir.vta.coproc_dep_push(2, 3, dtype=int32)
}
attr [IterVar(vta, (nullptr), "ThreadIndex", "vta")] "coproc_scope" = 3 {
@tir.vta.coproc_dep_pop(2, 3, dtype=int32)
@tir.call_extern("VTAStoreBuffer2D", @tir.tvm_thread_context(@tir.vta.command_handle(, dtype=handle), dtype=handle), 0, 4, C_2, 0, 64, 1, 64, dtype=int32)
}
@tir.vta.coproc_sync(, dtype=int32)
}
###Markdown
This concludes the scheduling portion of this tutorial. TVM compilation After we have finished specifying the schedule, we can compile it into a TVM function. By default TVM compiles into a type-erased function that can be directly called from Python. In the following line, we use {func}`tvm.build` to create the function. The `build` function takes the schedule, the desired signature of the function (including the inputs and outputs), and the target language we want to compile to.
###Code
my_vadd = vta.build(
s, [A, B, C], tvm.target.Target("ext_dev", host=env.target_host), name="my_vadd"
)
###Output
_____no_output_____
###Markdown
Saving the module TVM lets us save our module into a file so it can be loaded back later. This is called ahead-of-time compilation and saves us some compilation time. More importantly, this allows us to cross-compile the executable on our development machine and send it over to the Pynq FPGA board over RPC for execution. Write the compiled module into an object file.
###Code
temp = utils.tempdir()
my_vadd.save(temp.relpath("vadd.o"))
###Output
_____no_output_____
###Markdown
Send the executable over RPC:
###Code
remote.upload(temp.relpath("vadd.o"))
###Output
_____no_output_____
###Markdown
Loading the module We can load the compiled module from the file system to run the code.
###Code
f = remote.load_module("vadd.o")
###Output
_____no_output_____
###Markdown
Running the function The compiled TVM function exposes a concise C API and can be invoked from any language. TVM provides an array API in Python to aid quick testing and prototyping. The array API is based on the [DLPack](https://github.com/dmlc/dlpack) standard.- First we create a remote context (for remote execution on the Pynq).- Then `tvm.nd.array` formats the data accordingly.- `f()` runs the actual computation.- `numpy()` copies the result array back in a format that can be interpreted. Randomly initialize the A and B arrays with integers in the range $(-128, 128]$:
###Code
size = o * env.BATCH, m * env.BLOCK_OUT
A_orig = np.random.randint(-128, 128, size=size).astype(A.dtype)
B_orig = np.random.randint(-128, 128, size=size).astype(B.dtype)
###Output
_____no_output_____
###Markdown
Apply packing to the A and B arrays, from a 2D to a 4D packed layout:
###Code
A_packed = A_orig.reshape(o, env.BATCH, m, env.BLOCK_OUT).transpose((0, 2, 1, 3))
B_packed = B_orig.reshape(o, env.BATCH, m, env.BLOCK_OUT).transpose((0, 2, 1, 3))
###Output
_____no_output_____
###Markdown
Get the remote device context:
###Code
ctx = remote.ext_dev(0)
###Output
_____no_output_____
###Markdown
Format the input/output arrays to the DLPack standard using {func}`tvm.nd.array`:
###Code
A_nd = tvm.nd.array(A_packed, ctx)
B_nd = tvm.nd.array(B_packed, ctx)
C_nd = tvm.nd.array(np.zeros((o, m, env.BATCH, env.BLOCK_OUT)).astype(C.dtype), ctx)
###Output
_____no_output_____
###Markdown
Invoke the module to perform the computation:
###Code
f(A_nd, B_nd, C_nd)
###Output
_____no_output_____
###Markdown
Verifying correctness Compute the reference result with `numpy`, and assert that the output of the computation is indeed correct:
###Code
C_ref = (A_orig.astype(env.acc_dtype) + B_orig.astype(env.acc_dtype)).astype(C.dtype)
C_ref = C_ref.reshape(o, env.BATCH, m, env.BLOCK_OUT).transpose((0, 2, 1, 3))
np.testing.assert_equal(C_ref, C_nd.numpy())
print("Successful vector add test!")
###Output
Successful vector add test!
|
synthetic/simulate_data.ipynb | ###Markdown
Synthetic data simulationThis notebook generates a dataset of a synthetic cell acquired in 3 channels and exported in h5 format. The movie has 40 frames. The top edge of the cell moves out in frames 0-20 and backwards in frames 20-40. The first channel represents a segmentation channel. The two others contain a gradient of intensity with max at the top edge. That signal increases and decreases over ~20 frames but has a shift of 4 frames between channel 2 and 3.
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import binary_fill_holes
from morphodynamics.splineutils import splevper, spline_to_param_image, fit_spline
import skimage.filters
import skimage.util
import ipywidgets as ipw
import h5py
from pathlib import Path
%matplotlib widget
np.random.seed(42)
###Output
_____no_output_____
###Markdown
The simulated data are generated by starting with a circular cell and progressively deforming it. A random displacement is applied to all points. Additionally, a constant positive/negative force is applied to the top region of the cell, making it expand and then retract.
###Code
width = 100
height = 100
radius = 10
position = np.array([50,50])
circle = position+ np.array([[radius*np.cos(x), radius*np.sin(x)] for x in np.arange(0,2*np.pi, 0.01)])
circle_or = circle.copy()
dist_from_point = np.array([np.linalg.norm(x-circle[0,:]) for x in circle])
dist_from_point[dist_from_point < 10] = 0
steps = 40
image_stack = np.zeros((steps, height, width))
grad_image = (np.ones((height, width))*np.arange(0,width)).T
grad_image = grad_image/width
vert_stack = grad_image*np.ones((steps, height, width))
vert_stack = np.rollaxis(vert_stack,0,3)
wave1 = np.sin(-0.3+0.15*np.arange(0,steps))
wave1[wave1<0]=0
wave2 = np.sin(-0.9+0.15*np.arange(0,steps))
wave2[wave2<0]=0
vert_stack1 = vert_stack * wave1
vert_stack2 = vert_stack * wave2
vert_stack1 = np.rollaxis(vert_stack1,2,0)
vert_stack2 = np.rollaxis(vert_stack2,2,0)
for i in range(40):
if i<20:
fact = -0.5
else:
fact = 0.5
move_noise = np.random.normal(loc=0,scale=0.5, size=circle.shape)
move_noise[:,0] += fact*dist_from_point
circle = circle + 0.1*move_noise
circle_s = fit_spline(circle, 100)
rasterized = spline_to_param_image(1000, (100,100), circle_s, deltat=0)
image = binary_fill_holes(rasterized > -1).astype(np.uint8)
image_stack[i,:,:] = image
temp = vert_stack1[i,:,:]
temp[image==0] =0
temp = vert_stack2[i,:,:]
temp[image==0] =0
#vert_stack1[image==0] =0
fig, ax = plt.subplots()
plt.plot(wave1)
plt.plot(wave2)
fig.suptitle('Intensity variation in channel 2 and 3');
# make stacks microscopy-like by blurring and adding noise.
im_stack_gauss = skimage.filters.gaussian(image_stack, preserve_range=True)
im_stack_noise = skimage.util.random_noise(im_stack_gauss,'gaussian')
im_stack_noise = skimage.util.img_as_ubyte(im_stack_noise)
signal1_gauss = skimage.filters.gaussian(vert_stack1, preserve_range=True)
signal1_noise = skimage.util.random_noise(signal1_gauss,'gaussian')
signal1_noise = skimage.util.img_as_ubyte(signal1_noise)
signal2_gauss = skimage.filters.gaussian(vert_stack2, preserve_range=True)
signal2_noise = skimage.util.random_noise(signal2_gauss,'gaussian')
signal2_noise = skimage.util.img_as_ubyte(signal2_noise)
def update_fig(ind):
#im.set_array(im_stack_noise[ind,:,:])
im.set_array(signal1_noise[ind,:,:])
fig, ax = plt.subplots()
#im = ax.imshow(im_stack_noise[0,:,:], cmap = 'gray')
im = ax.imshow(signal1_noise[0,:,:], cmap = 'gray')
ipw.HBox([ipw.interactive(update_fig, ind=ipw.IntSlider(0,0,39))])
# export data as h5 files
main_folder = Path('./data')
h5_name = main_folder.joinpath('synth_ch1.h5')
with h5py.File(h5_name, "w") as f_out:
dset = f_out.create_dataset("volume", data=im_stack_noise, chunks=True, compression="gzip", compression_opts=1)
main_folder = Path('./data')
h5_name = main_folder.joinpath('synth_ch2.h5')
with h5py.File(h5_name, "w") as f_out:
dset = f_out.create_dataset("volume", data=signal1_noise, chunks=True, compression="gzip", compression_opts=1)
main_folder = Path('./data')
h5_name = main_folder.joinpath('synth_ch3.h5')
with h5py.File(h5_name, "w") as f_out:
dset = f_out.create_dataset("volume", data=signal2_noise, chunks=True, compression="gzip", compression_opts=1)
###Output
_____no_output_____ |
how-to-use-azureml/explain-model/explain-tabular-data-local/explain-local-sklearn-binary-classification.ipynb | ###Markdown
Breast cancer diagnosis classification with scikit-learn (run model explainer locally) ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/explain-model/explain-tabular-data-local/explain-local-sklearn-binary-classification.png) Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Explain a model with the AML explain-model package1. Train a SVM classification model using Scikit-learn2. Run 'explain_model' with full data in local mode, which doesn't contact any Azure services3. Run 'explain_model' with summarized data in local mode, which doesn't contact any Azure services4. Visualize the global and local explanations with the visualization dashboard.
###Code
from sklearn.datasets import load_breast_cancer
from sklearn import svm
from azureml.explain.model.tabular_explainer import TabularExplainer
###Output
_____no_output_____
###Markdown
1. Run model explainer locally with full data Load the breast cancer diagnosis data
###Code
breast_cancer_data = load_breast_cancer()
classes = breast_cancer_data.target_names.tolist()
# Split data into train and test
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(breast_cancer_data.data, breast_cancer_data.target, test_size=0.2, random_state=0)
###Output
_____no_output_____
###Markdown
Train a SVM classification model, which you want to explain
###Code
clf = svm.SVC(gamma=0.001, C=100., probability=True)
model = clf.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
Explain predictions on your local machine
###Code
tabular_explainer = TabularExplainer(model, x_train, features=breast_cancer_data.feature_names, classes=classes)
###Output
_____no_output_____
###Markdown
Explain overall model predictions (global explanation)
###Code
# Passing in test dataset for evaluation examples - note it must be a representative sample of the original data
# x_train can be passed as well, but with more examples explanations will take longer although they may be more accurate
global_explanation = tabular_explainer.explain_global(x_test)
# Sorted SHAP values
print('ranked global importance values: {}'.format(global_explanation.get_ranked_global_values()))
# Corresponding feature names
print('ranked global importance names: {}'.format(global_explanation.get_ranked_global_names()))
# feature ranks (based on original order of features)
print('global importance rank: {}'.format(global_explanation.global_importance_rank))
# per class feature names
print('ranked per class feature names: {}'.format(global_explanation.get_ranked_per_class_names()))
# per class feature importance values
print('ranked per class feature values: {}'.format(global_explanation.get_ranked_per_class_values()))
dict(zip(global_explanation.get_ranked_global_names(), global_explanation.get_ranked_global_values()))
###Output
_____no_output_____
###Markdown
Explain overall model predictions as a collection of local (instance-level) explanations
###Code
# feature shap values for all features and all data points in the training data
print('local importance values: {}'.format(global_explanation.local_importance_values))
###Output
_____no_output_____
###Markdown
Explain local data points (individual instances)
###Code
# explain the first member of the test set
instance_num = 0
local_explanation = tabular_explainer.explain_local(x_test[instance_num,:])
# get the prediction for the first member of the test set and explain why model made that prediction
prediction_value = clf.predict(x_test)[instance_num]
sorted_local_importance_values = local_explanation.get_ranked_local_values()[prediction_value]
sorted_local_importance_names = local_explanation.get_ranked_local_names()[prediction_value]
dict(zip(sorted_local_importance_names, sorted_local_importance_values))
###Output
_____no_output_____
###Markdown
2. Load visualization dashboard
###Code
from azureml.contrib.explain.model.visualize import ExplanationDashboard
ExplanationDashboard(global_explanation, model, x_test)
###Output
_____no_output_____
###Markdown
Breast cancer diagnosis classification with scikit-learn (run model explainer locally) ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/explain-model/explain-tabular-data-local/explain-local-sklearn-binary-classification.png) Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Explain a model with the AML explain-model package1. Train a SVM classification model using Scikit-learn2. Run 'explain_model' with full data in local mode, which doesn't contact any Azure services3. Run 'explain_model' with summarized data in local mode, which doesn't contact any Azure services4. Visualize the global and local explanations with the visualization dashboard.
###Code
from sklearn.datasets import load_breast_cancer
from sklearn import svm
from azureml.explain.model.tabular_explainer import TabularExplainer
###Output
_____no_output_____
###Markdown
1. Run model explainer locally with full data Load the breast cancer diagnosis data
###Code
breast_cancer_data = load_breast_cancer()
classes = breast_cancer_data.target_names.tolist()
# Split data into train and test
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(breast_cancer_data.data, breast_cancer_data.target, test_size=0.2, random_state=0)
###Output
_____no_output_____
###Markdown
Train a SVM classification model, which you want to explain
###Code
clf = svm.SVC(gamma=0.001, C=100., probability=True)
model = clf.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
Explain predictions on your local machine
###Code
tabular_explainer = TabularExplainer(model, x_train, features=breast_cancer_data.feature_names, classes=classes)
###Output
_____no_output_____
###Markdown
Explain overall model predictions (global explanation)
###Code
# Passing in test dataset for evaluation examples - note it must be a representative sample of the original data
# x_train can be passed as well, but with more examples explanations will take longer although they may be more accurate
global_explanation = tabular_explainer.explain_global(x_test)
# Sorted SHAP values
print('ranked global importance values: {}'.format(global_explanation.get_ranked_global_values()))
# Corresponding feature names
print('ranked global importance names: {}'.format(global_explanation.get_ranked_global_names()))
# feature ranks (based on original order of features)
print('global importance rank: {}'.format(global_explanation.global_importance_rank))
# per class feature names
print('ranked per class feature names: {}'.format(global_explanation.get_ranked_per_class_names()))
# per class feature importance values
print('ranked per class feature values: {}'.format(global_explanation.get_ranked_per_class_values()))
dict(zip(global_explanation.get_ranked_global_names(), global_explanation.get_ranked_global_values()))
###Output
_____no_output_____
###Markdown
Explain overall model predictions as a collection of local (instance-level) explanations
###Code
# feature shap values for all features and all data points in the training data
print('local importance values: {}'.format(global_explanation.local_importance_values))
###Output
_____no_output_____
###Markdown
Explain local data points (individual instances)
###Code
# explain the first member of the test set
instance_num = 0
local_explanation = tabular_explainer.explain_local(x_test[instance_num,:])
# get the prediction for the first member of the test set and explain why model made that prediction
prediction_value = clf.predict(x_test)[instance_num]
sorted_local_importance_values = local_explanation.get_ranked_local_values()[prediction_value]
sorted_local_importance_names = local_explanation.get_ranked_local_names()[prediction_value]
dict(zip(sorted_local_importance_names, sorted_local_importance_values))
###Output
_____no_output_____
###Markdown
2. Load visualization dashboard
###Code
# Note you will need to have extensions enabled prior to jupyter kernel starting
!jupyter nbextension install --py --sys-prefix azureml.contrib.explain.model.visualize
!jupyter nbextension enable --py --sys-prefix azureml.contrib.explain.model.visualize
# Or, in Jupyter Labs, uncomment below
# jupyter labextension install @jupyter-widgets/jupyterlab-manager
# jupyter labextension install microsoft-mli-widget
from azureml.contrib.explain.model.visualize import ExplanationDashboard
ExplanationDashboard(global_explanation, model, x_test)
###Output
_____no_output_____
###Markdown
Breast cancer diagnosis classification with scikit-learn (run model explainer locally) Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Explain a model with the AML explain-model package1. Train a SVM classification model using Scikit-learn2. Run 'explain_model' with full data in local mode, which doesn't contact any Azure services3. Run 'explain_model' with summarized data in local mode, which doesn't contact any Azure services4. Visualize the global and local explanations with the visualization dashboard.
###Code
from sklearn.datasets import load_breast_cancer
from sklearn import svm
from azureml.explain.model.tabular_explainer import TabularExplainer
###Output
_____no_output_____
###Markdown
1. Run model explainer locally with full data Load the breast cancer diagnosis data
###Code
breast_cancer_data = load_breast_cancer()
classes = breast_cancer_data.target_names.tolist()
# Split data into train and test
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(breast_cancer_data.data, breast_cancer_data.target, test_size=0.2, random_state=0)
###Output
_____no_output_____
###Markdown
Train a SVM classification model, which you want to explain
###Code
clf = svm.SVC(gamma=0.001, C=100., probability=True)
model = clf.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
Explain predictions on your local machine
###Code
tabular_explainer = TabularExplainer(model, x_train, features=breast_cancer_data.feature_names, classes=classes)
###Output
_____no_output_____
###Markdown
Explain overall model predictions (global explanation)
###Code
# Passing in test dataset for evaluation examples - note it must be a representative sample of the original data
# x_train can be passed as well, but with more examples explanations will take longer although they may be more accurate
global_explanation = tabular_explainer.explain_global(x_test)
# Sorted SHAP values
print('ranked global importance values: {}'.format(global_explanation.get_ranked_global_values()))
# Corresponding feature names
print('ranked global importance names: {}'.format(global_explanation.get_ranked_global_names()))
# feature ranks (based on original order of features)
print('global importance rank: {}'.format(global_explanation.global_importance_rank))
# per class feature names
print('ranked per class feature names: {}'.format(global_explanation.get_ranked_per_class_names()))
# per class feature importance values
print('ranked per class feature values: {}'.format(global_explanation.get_ranked_per_class_values()))
dict(zip(global_explanation.get_ranked_global_names(), global_explanation.get_ranked_global_values()))
###Output
_____no_output_____
###Markdown
Explain overall model predictions as a collection of local (instance-level) explanations
###Code
# feature shap values for all features and all data points in the training data
print('local importance values: {}'.format(global_explanation.local_importance_values))
###Output
_____no_output_____
###Markdown
Explain local data points (individual instances)
###Code
# explain the first member of the test set
instance_num = 0
local_explanation = tabular_explainer.explain_local(x_test[instance_num,:])
# get the prediction for the first member of the test set and explain why model made that prediction
prediction_value = clf.predict(x_test)[instance_num]
sorted_local_importance_values = local_explanation.get_ranked_local_values()[prediction_value]
sorted_local_importance_names = local_explanation.get_ranked_local_names()[prediction_value]
dict(zip(sorted_local_importance_names, sorted_local_importance_values))
###Output
_____no_output_____
###Markdown
2. Load visualization dashboard
###Code
from azureml.contrib.explain.model.visualize import ExplanationDashboard
ExplanationDashboard(global_explanation, model, x_test)
###Output
_____no_output_____ |
Kimmo Tolonen - Case 2.ipynb | ###Markdown
Week 4. Case 2 Kimmo Eemil Juhani Tolonen Last edited: 25.2.2018 Cognitive Systems for Health Technology Applications Helsinki Metropolia University of Applied Sciences 1. Objectives The aim of this Case 2 is to learn to use convolutional neural networks to classify medical images. I downloaded a data file full of diabetic retinopathy images, thousands and thousands of images. There are three different folders for training, testing and validation, and two different types of images: non-symptom and symptom images. First I import all the libraries that I need, then I build the neural network, process the data and so on. All of that can be found below in this notebook file. Note (25.2) Okay, now I take a little risk here. I'm not happy with the last result that I got from training last time; the result was only 0.73. I made small changes and now I run everything one more time. It is 6:32pm right now and the deadline is under 3 hours from now. So let's see what happens and how much time this takes. - Kimmo 2. Import libraries
###Code
# Code, model and history filenames
my_code = 'gpu_template.py'
model_filename = 'case_2_model.h5'
history_filename = 'case_2_history.p'
# Info for the operator
import time
print('----------------------------------------------------------------------')
print(' ')
print('Starting the code (', time.asctime(), '):', my_code)
print(' ')
import numpy as np
import matplotlib.pyplot as plt
import keras
from keras import layers
from keras import models
import pickle
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
import os
%matplotlib inline
###Output
_____no_output_____
###Markdown
3. Building network This chapter is the place where building model happens. I determined batch sizes and number of epoches allready here for data processing, which is later in this code. Adding layers happens here too. I tried VGG16 here, but that gave only 0,70 accuracy for testing the data, so I decided drop that out.
###Code
# Training parameters
batch_size = 40
epochs = 20
steps_per_epoch = 20
validation_steps = 20
image_height = 150
image_width = 150
# Build the model
model = models.Sequential()
model.add(layers.Conv2D(64, (3, 3), activation = 'relu',
input_shape = (image_height, image_width, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation = 'relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation = 'relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(138, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
###Output
_____no_output_____
###Markdown
4. Data preprocessing
###Code
# Dataset directories and labels files
train_dir = "..\\..\\dataset2\\train"
validation_dir = "..\\..\\dataset2\\validation"
test_dir = "..\\..\\dataset2\\test"
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
# Create datagenerators for training, validation and testing
train_datagen = ImageDataGenerator(rescale = 1./255,
zoom_range = 0.2,
horizontal_flip = True)
validation_datagen = ImageDataGenerator(rescale = 1./255)
test_datagen = ImageDataGenerator(rescale = 1./255)
# shapes
for data_batch, labels_batch in train_generator:
print('data batch shape:', data_batch.shape)
print('labels batch shape:', labels_batch.shape)
break
# Generator for validation dataset
print('Validation dataset.')
validation_generator = validation_datagen.flow_from_directory(
validation_dir,
target_size = (image_height, image_width),
batch_size = batch_size,
class_mode = 'binary')
labels_batch
###Output
_____no_output_____
###Markdown
5. Modeling
###Code
# Compile the model
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(),
metrics=['acc'])
# This creates a file in my folder from the next training
model.save('case_2_run_1.h5')
# Model training; also show how much time it takes... and sometimes it takes a lot...
t1 = time.time()
h = model.fit_generator(
train_generator,
steps_per_epoch = steps_per_epoch,
verbose = 1,
epochs = epochs,
validation_data = validation_generator,
validation_steps = validation_steps)
t2 = time.time()
# Store the elapsed time into history
h.history.update({'time_elapsed': t2 - t1})
print(' ')
print('Total elapsed time for training: {:.3f} minutes'.format((t2-t1)/60))
print(' ')
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
test_loss, test_acc = model.evaluate_generator(test_generator, steps = 21)
# Test accuracy
print('test_acc:', test_acc)
###Output
Found 413 images belonging to 2 classes.
test_acc: 0.791767552459
###Markdown
6. Results I trained that model many times and I did not get over 0.80 overall. I switched the number of epochs, batch sizes, picture sizes, layer sizes and many other things. Sometimes training took over an hour on my laptop. One time my laptop even crashed from overheating, maybe from that or from something else, but right after training completed I put the laptop aside from the table and it crashed. I trained these results on Sunday evening and I decided to leave it here. I made small additions to the results when I went through some PowerPoints from Oma, copied them and added them to this notebook. Compiled explanations can be found below in the "Conclusions" section.
###Code
import matplotlib.pyplot as plt
acc = h.history['acc']
val_acc = h.history['val_acc']
loss = h.history['loss']
val_loss = h.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
# Predict the Score
y_true = np.zeros(413)
y_score = np.zeros(413)
sample_count = 413
i = 0
for inputs_batch, labels_batch in test_generator:
predicts_batch = model.predict(inputs_batch)
L = labels_batch.shape[0]
index = range(i, i + L)
y_true[index] = labels_batch.ravel()
y_score[index] = predicts_batch.ravel()
i = i + L
if i >= sample_count:
break
from sklearn.metrics import roc_curve, roc_auc_score
fpr, tpr, thresholds = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)
plt.figure()
plt.plot(fpr, tpr)
plt.plot([0, 1], [0, 1], '--')
plt.grid()
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve AUC = {:.3f}'.format(auc))
plt.show()
plt.figure()
plt.plot(thresholds, 1-fpr, label = 'specificity')
plt.plot(thresholds, tpr, label = 'sensitivity')
plt.legend()
plt.grid()
plt.xlabel('Threshold value')
plt.show()
# Import more libraries from sklearn.
from sklearn.metrics import accuracy_score, precision_score, f1_score, confusion_matrix
from sklearn.metrics import classification_report, recall_score
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import average_precision_score
# Select the threshold to maximize both specificity and sensitivity
th = 0.3
acc = accuracy_score(y_true, y_score > th)
prec = precision_score(y_true, y_score > th)
f1 = f1_score(y_true, y_score > th)
recall = recall_score(y_true, y_score > th)
print('Accuracy: {:.3f}'.format(acc))
print('Precision: {:.3f}'.format(prec))
print('Recall: {:.3f}'.format(recall))
print('F1: {:.3f}'.format(f1))
print('Classification report')
print(classification_report(y_true, y_score > th, labels = [1.0, 0.0], target_names = ['Disease', 'Healthy']))
tn, fp, fn, tp = confusion_matrix(y_true, y_score > th).ravel()
print(' Confusion matrix')
print(' True condition')
print(' Positive Negative Sum')
print('Predicted | Positive {:8} {:8} {:8}'.format(tp, fp, tp + fp))
print('condition | Negative {:8} {:8} {:8}'.format(fn, tn, fn + tn))
print(' Sum {:8} {:8} {:8}'.format(tp + fn, fp + tn, tp + fp + fn + tn))
print(' ')
print('Sensitivity: {:.3f}'.format(tp/(tp+fn)))
print('Specificity: {:.3f}'.format(tn/(tn+fp)))
###Output
Confusion matrix
True condition
Positive Negative Sum
Predicted | Positive 109 107 216
condition | Negative 10 187 197
Sum 119 294 413
Sensitivity: 0.916
Specificity: 0.636
|
03.Set.ipynb | ###Markdown
Set ```text
<set> = set()
<set>.add(<el>)                                Or: <set> |= {<el>}
<set>.update(<collection>)                     Or: <set> |= <set>
<set> = <set>.union(<coll.>)                   Or: <set> | <set>
<set> = <set>.intersection(<coll.>)            Or: <set> & <set>
<set> = <set>.difference(<coll.>)              Or: <set> - <set>
<set> = <set>.symmetric_difference(<coll.>)    Or: <set> ^ <set>
<bool> = <set>.issubset(<coll.>)               Or: <set> <= <set>
<bool> = <set>.issuperset(<coll.>)             Or: <set> >= <set>
```
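As a quick illustration of the operator shorthands listed above, here is a minimal sketch with two small example sets (the values are arbitrary and only for illustration):
```python
a = {1, 2, 3}
b = {3, 4}

a |= {5}            # in-place update, same effect as a.update({5})
print(a | b)        # union                -> {1, 2, 3, 4, 5}
print(a & b)        # intersection         -> {3}
print(a - b)        # difference           -> {1, 2, 5}
print(a ^ b)        # symmetric difference -> {1, 2, 4, 5}
print({1, 2} <= a)  # issubset             -> True
print(a >= b)       # issuperset           -> False (4 is not in a)
```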
###Code
set1 = {1,2,4,5}
type(set1)
list1 = [1,2,3,4]
set2 = set(list1)
set2
set2.add(5)
set1.union(set2)
set1.intersection(set2)
set2.difference(set1)
set2.add(6)
set2.add(7)
set2.symmetric_difference(set1)
set2.issubset(set1)
set2.issuperset(set1)
###Output
_____no_output_____
###Markdown
```text
<el> = <set>.pop()                             Raises KeyError if empty.
<set>.remove(<el>)                             Raises KeyError if missing.
<set>.discard(<el>)                            Doesn't raise an error.
```
###Code
set2.pop()
set2
set2.remove(7)
set2
set2.discard(10)
set2.discard(6)
set2
###Output
_____no_output_____ |
doc/ipython-notebooks/ica/bss_image.ipynb | ###Markdown
Blind Source Separation on Images with Shogun by Kevin Hughes This notebook illustrates Blind Source Separation (BSS) on images using Independent Component Analysis (ICA) in Shogun. This is very similar to the BSS audio notebook except that here we have used images instead of audio signals. The first step is to load 2 images from the Shogun data repository:
###Code
# change to the shogun-data directory
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
os.chdir(os.path.join(SHOGUN_DATA_DIR, 'ica'))
from PIL import Image
import numpy as np
# Load Images as grayscale images and convert to numpy arrays
s1 = np.asarray(Image.open("lena.jpg").convert('L'))
s2 = np.asarray(Image.open("monalisa.jpg").convert('L'))
# Save Image Dimensions
# we'll need these later for reshaping the images
rows = s1.shape[0]
cols = s1.shape[1]
###Output
_____no_output_____
###Markdown
Displaying the images using pylab:
###Code
%matplotlib inline
import pylab as pl
# Show Images
f,(ax1,ax2) = pl.subplots(1,2)
ax1.imshow(s1, cmap=pl.gray()) # set the color map to gray, only needs to be done once!
ax2.imshow(s2)
###Output
_____no_output_____
###Markdown
In our previous ICA examples the input data or source signals were already 1D but these images are obviously 2D. One common way to handle this case is to simply "flatten" the 2D image matrix into a 1D row vector. The same idea can also be applied to 3D data, for example a 3 channel RGB image can be converted to a row vector by reshaping each 2D channel into a row vector and then placing them after each other lengthwise.Let's prep the data:
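For the 3-channel RGB case mentioned above, a minimal numpy sketch could look like the following (the random image is purely illustrative and not one of the images loaded in this notebook):
```python
import numpy as np

# illustrative 4x5 RGB image (height x width x 3 channels)
rgb = np.random.randint(0, 256, size=(4, 5, 3), dtype=np.uint8)

# flatten each 2D channel into a row vector and place them one after another
row_vector = np.concatenate([rgb[:, :, c].flatten() for c in range(rgb.shape[2])])
print(row_vector.shape)  # (60,) = 4*5*3

# undo the same reshaping to recover the original image
recovered = np.stack([ch.reshape(4, 5) for ch in np.split(row_vector, 3)], axis=2)
assert np.array_equal(recovered, rgb)
```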
###Code
# Convert Images to row vectors
# and stack into a Data Matrix
S = np.c_[s1.flatten(), s2.flatten()].T
###Output
_____no_output_____
###Markdown
It is pretty easy using a nice library like numpy.Next we need to mix our source signals together. We do this exactly the same way we handled the audio data - take a look!
###Code
# Mixing Matrix
A = np.array([[1, 0.5], [0.5, 1]])
# Mix Signals
X = np.dot(A,S)
# Show Images
f,(ax1,ax2) = pl.subplots(1,2)
ax1.imshow(X[0,:].reshape(rows,cols))
ax2.imshow(X[1,:].reshape(rows,cols))
###Output
_____no_output_____
###Markdown
Notice how we had to reshape from a 1D row vector back into a 2D matrix of the correct shape. There is also another nuance that I would like to mention here: pylab is actually doing quite a lot for us here that you might not be aware of. It does a pretty good job determining the value range of the image to be shown and then it applies the color map. Many other libraries (for example OpenCV's highgui) won't be this helpful and you'll need to remember to scale the image appropriately on your own before trying to display it. Now onto the exciting step, unmixing the images using ICA! Again this step is the same as when using audio data. Again we need to reshape the images before viewing them and an additional nuance was to add the *-1 to the first separated signal. I did this after viewing the result the first time as the image was clearly inverted; this can happen because ICA can't necessarily capture the correct phase.
###Code
import shogun as sg
mixed_signals = sg.features(X)
# Separating
jade = sg.transformer('Jade')
jade.fit(mixed_signals)
signals = jade.transform(mixed_signals)
S_ = signals.get('feature_matrix')
# Show Images
f,(ax1,ax2) = pl.subplots(1,2)
ax1.imshow(S_[0,:].reshape(rows,cols) *-1)
ax2.imshow(S_[1,:].reshape(rows,cols))
###Output
_____no_output_____
###Markdown
Blind Source Separation on Images with Shogun by Kevin Hughes This notebook illustrates Blind Source Separation (BSS) on images using Independent Component Analysis (ICA) in Shogun. This is very similar to the BSS audio notebook except that here we have used images instead of audio signals. The first step is to load 2 images from the Shogun data repository:
###Code
# change to the shogun-data directory
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
os.chdir(os.path.join(SHOGUN_DATA_DIR, 'ica'))
from PIL import Image
import numpy as np
# Load Images as grayscale images and convert to numpy arrays
s1 = np.asarray(Image.open("lena.jpg").convert('L'))
s2 = np.asarray(Image.open("monalisa.jpg").convert('L'))
# Save Image Dimensions
# we'll need these later for reshaping the images
rows = s1.shape[0]
cols = s1.shape[1]
###Output
_____no_output_____
###Markdown
Displaying the images using pylab:
###Code
%matplotlib inline
import matplotlib.pyplot as plt
# Show Images
f,(ax1,ax2) = plt.subplots(1,2)
ax1.imshow(s1, cmap=plt.gray()) # set the color map to gray, only needs to be done once!
ax2.imshow(s2)
###Output
_____no_output_____
###Markdown
In our previous ICA examples the input data or source signals were already 1D but these images are obviously 2D. One common way to handle this case is to simply "flatten" the 2D image matrix into a 1D row vector. The same idea can also be applied to 3D data, for example a 3 channel RGB image can be converted to a row vector by reshaping each 2D channel into a row vector and then placing them after each other lengthwise.Let's prep the data:
###Code
# Convert Images to row vectors
# and stack into a Data Matrix
S = np.c_[s1.flatten(), s2.flatten()].T
###Output
_____no_output_____
###Markdown
It is pretty easy using a nice library like numpy.Next we need to mix our source signals together. We do this exactly the same way we handled the audio data - take a look!
###Code
# Mixing Matrix
A = np.array([[1, 0.5], [0.5, 1]])
# Mix Signals
X = np.dot(A,S)
# Show Images
f,(ax1,ax2) = plt.subplots(1,2)
ax1.imshow(X[0,:].reshape(rows,cols))
ax2.imshow(X[1,:].reshape(rows,cols))
###Output
_____no_output_____
###Markdown
Notice how we had to reshape from a 1D row vector back into a 2D matrix of the correct shape. There is also another nuance that I would like to mention here: pylab is actually doing quite a lot for us here that you might not be aware of. It does a pretty good job determining the value range of the image to be shown and then it applies the color map. Many other libraries (for example OpenCV's highgui) won't be this helpful and you'll need to remember to scale the image appropriately on your own before trying to display it. Now onto the exciting step, unmixing the images using ICA! Again this step is the same as when using audio data. Again we need to reshape the images before viewing them and an additional nuance was to add the *-1 to the first separated signal. I did this after viewing the result the first time as the image was clearly inverted; this can happen because ICA can't necessarily capture the correct phase.
###Code
import shogun as sg
mixed_signals = sg.features(X)
# Separating
jade = sg.transformer('Jade')
jade.fit(mixed_signals)
signals = jade.transform(mixed_signals)
S_ = signals.get('feature_matrix')
# Show Images
f,(ax1,ax2) = plt.subplots(1,2)
ax1.imshow(S_[0,:].reshape(rows,cols) *-1)
ax2.imshow(S_[1,:].reshape(rows,cols))
###Output
_____no_output_____
###Markdown
Blind Source Separation on Images with Shogun by Kevin Hughes This notebook illustrates Blind Source Separation (BSS) on images using Independent Component Analysis (ICA) in Shogun. This is very similar to the BSS audio notebook except that here we have used images instead of audio signals. The first step is to load 2 images from the Shogun data repository:
###Code
# change to the shogun-data directory
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
os.chdir(os.path.join(SHOGUN_DATA_DIR, 'ica'))
from PIL import Image
import numpy as np
# Load Images as grayscale images and convert to numpy arrays
s1 = np.asarray(Image.open("lena.jpg").convert('L'))
s2 = np.asarray(Image.open("monalisa.jpg").convert('L'))
# Save Image Dimensions
# we'll need these later for reshaping the images
rows = s1.shape[0]
cols = s1.shape[1]
###Output
_____no_output_____
###Markdown
Displaying the images using pylab:
###Code
%matplotlib inline
import matplotlib.pyplot as plt
# Show Images
f,(ax1,ax2) = plt.subplots(1,2)
ax1.imshow(s1, cmap=plt.gray()) # set the color map to gray, only needs to be done once!
ax2.imshow(s2)
###Output
_____no_output_____
###Markdown
In our previous ICA examples the input data or source signals were already 1D but these images are obviously 2D. One common way to handle this case is to simply "flatten" the 2D image matrix into a 1D row vector. The same idea can also be applied to 3D data, for example a 3 channel RGB image can be converted to a row vector by reshaping each 2D channel into a row vector and then placing them after each other lengthwise.Let's prep the data:
###Code
# Convert Images to row vectors
# and stack into a Data Matrix
S = np.c_[s1.flatten(), s2.flatten()].T
###Output
_____no_output_____
###Markdown
It is pretty easy using a nice library like numpy.Next we need to mix our source signals together. We do this exactly the same way we handled the audio data - take a look!
###Code
# Mixing Matrix
A = np.array([[1, 0.5], [0.5, 1]])
# Mix Signals
X = np.dot(A,S)
# Show Images
f,(ax1,ax2) = plt.subplots(1,2)
ax1.imshow(X[0,:].reshape(rows,cols))
ax2.imshow(X[1,:].reshape(rows,cols))
###Output
_____no_output_____
###Markdown
Notice how we had to reshape from a 1D row vector back into a 2D matrix of the correct shape. There is also another nuance that I would like to mention here: pylab is actually doing quite a lot for us here that you might not be aware of. It does a pretty good job determining the value range of the image to be shown and then it applies the color map. Many other libraries (for example OpenCV's highgui) won't be this helpful and you'll need to remember to scale the image appropriately on your own before trying to display it. Now onto the exciting step, unmixing the images using ICA! Again this step is the same as when using audio data. Again we need to reshape the images before viewing them and an additional nuance was to add the *-1 to the first separated signal. I did this after viewing the result the first time as the image was clearly inverted; this can happen because ICA can't necessarily capture the correct phase.
###Code
import shogun as sg
mixed_signals = sg.create_features(X)
# Separating
jade = sg.create_transformer('Jade')
jade.fit(mixed_signals)
signals = jade.transform(mixed_signals)
S_ = signals.get('feature_matrix')
# Show Images
f,(ax1,ax2) = plt.subplots(1,2)
ax1.imshow(S_[0,:].reshape(rows,cols) *-1)
ax2.imshow(S_[1,:].reshape(rows,cols))
###Output
_____no_output_____
###Markdown
Blind Source Separation on Images with Shogun by Kevin Hughes This notebook illustrates Blind Source Separation (BSS) on images using Independent Component Analysis (ICA) in Shogun. This is very similar to the BSS audio notebook except that here we have used images instead of audio signals. The first step is to load 2 images from the Shogun data repository:
###Code
# change to the shogun-data directory
import os
import os
os.chdir(os.path.join(SHOGUN_DATA_DIR, 'ica'))
import Image
import numpy as np
# Load Images as grayscale images and convert to numpy arrays
s1 = np.asarray(Image.open("lena.jpg").convert('L'))
s2 = np.asarray(Image.open("monalisa.jpg").convert('L'))
# Save Image Dimensions
# we'll need these later for reshaping the images
rows = s1.shape[0]
cols = s1.shape[1]
###Output
_____no_output_____
###Markdown
Displaying the images using pylab:
###Code
%matplotlib inline
import pylab as pl
# Show Images
f,(ax1,ax2) = pl.subplots(1,2)
ax1.imshow(s1, cmap=pl.gray()) # set the color map to gray, only needs to be done once!
ax2.imshow(s2)
###Output
_____no_output_____
###Markdown
In our previous ICA examples the input data or source signals were already 1D but these images are obviously 2D. One common way to handle this case is to simply "flatten" the 2D image matrix into a 1D row vector. The same idea can also be applied to 3D data, for example a 3 channel RGB image can be converted to a row vector by reshaping each 2D channel into a row vector and then placing them after each other lengthwise.Let's prep the data:
###Code
# Convert Images to row vectors
# and stack into a Data Matrix
S = np.c_[s1.flatten(), s2.flatten()].T
###Output
_____no_output_____
###Markdown
It is pretty easy using a nice library like numpy.Next we need to mix our source signals together. We do this exactly the same way we handled the audio data - take a look!
###Code
# Mixing Matrix
A = np.array([[1, 0.5], [0.5, 1]])
# Mix Signals
X = np.dot(A,S)
# Show Images
f,(ax1,ax2) = pl.subplots(1,2)
ax1.imshow(X[0,:].reshape(rows,cols))
ax2.imshow(X[1,:].reshape(rows,cols))
###Output
_____no_output_____
###Markdown
Notice how we had to reshape from a 1D row vector back into a 2D matrix of the correct shape. There is also another nuance that I would like to mention here: pylab is actually doing quite a lot for us here that you might not be aware of. It does a pretty good job determining the value range of the image to be shown and then it applies the color map. Many other libraries (for example OpenCV's highgui) won't be this helpful and you'll need to remember to scale the image appropriately on your own before trying to display it. Now onto the exciting step, unmixing the images using ICA! Again this step is the same as when using audio data. Again we need to reshape the images before viewing them and an additional nuance was to add the *-1 to the first separated signal. I did this after viewing the result the first time as the image was clearly inverted; this can happen because ICA can't necessarily capture the correct phase.
###Code
from shogun import features
from shogun import Jade
mixed_signals = features(X)
# Separating
jade = Jade()
signals = jade.apply(mixed_signals)
S_ = signals.get_real_matrix('feature_matrix')
# Show Images
f,(ax1,ax2) = pl.subplots(1,2)
ax1.imshow(S_[0,:].reshape(rows,cols) *-1)
ax2.imshow(S_[1,:].reshape(rows,cols))
###Output
_____no_output_____ |
05. Probability distributions.ipynb | ###Markdown
Probability distributions To understand probability distributions, let us first look at the concept of random variables, which are used to model probability distributions. **Random variable:** A variable whose values are numerical outcomes of some random process, or a function that assigns values to each of an experiment's outcomes. It is generally denoted by $X$.Random variables are of two types:1. **Discrete random variables** can take a finite, countable number of values. For example, a dice roll can take values like 1, 2, 3, 4, 5 and 6. 2. **Continuous random variables** can take infinitely many values. Examples include temperature, height, and weight. The **probability mass function,** or **PMF** associated with a discrete random variable is a function that provides the probability that this variable is exactly equal to a certain discrete value.![](data/dpd.png)$$The\ graph\ of\ a\ probability\ mass\ function.\ All\ the\ values\ of\ this\ function\ must\ be\ non\ negative\ and\ sum\ up\ to\ 1.$$But for a continuous variable, we cannot find the absolute probability. Why? As we see, with continuous variables the number of possible outcomes is infinite. For example: if we consider weight, it can be 25.0001 kgs, 25.0000003 kgs, and so on. So if we try to calculate the absolute probability of the weight being exactly 25 kgs, it turns out to be zero.Hence, we use the **probability density function,** or **PDF** for continuous variables (the equivalent of the PMF for discrete variables). The PDF gives the probability that the value of a continuous random variable falls within a range of values. The **cumulative distribution function,** or **CDF** gives the probability of a random variable being less than or equal to a given value. It is the integral of the PDF and gives the area under the curve defined by the PDF up to a certain point.The common types of probability distributions for discrete random variables are: Binomial, Uniform and Poisson. **Binomial Distribution** To understand the binomial distribution, let's look at **binomial experiments**. A binomial experiment is an experiment that has the following properties:+ The experiment consists of $n$ repeated trials.+ Each trial has only two possible outcomes.+ The probability of success ($p$) and failure ($1-p$) is the same for each trial.+ Each trial is independent.A simple example is tossing an unbiased coin $n$ times. In this example, the probability that the outcome is heads can be considered equal to $p$, and $1-p$ for tails (the probabilities of mutually exclusive events that encompass all possible outcomes need to sum up to one). Each time the coin is tossed, the outcome is independent of all other trials.The **binomial distribution** describes the probability of obtaining $k$ successes in $n$ binomial experiments.If a random variable $X$ follows a binomial distribution, then the probability that $X = k$ can be found by the following formula:$$P(X=k) = ^n C_k p^k (1-p)^{n-k}$$Where, $p$ is the probability of success $(1-p)$ is the probability of failure $n$ is the number of trials The binomial distribution has the following properties:+ Mean = $n*p$ (number of trials * probability of success) + Variance = $n*p*q$ (number of trials * probability of success * probability of failure) **Example:**By some estimates, twenty percent (20%) of a country's population has no health insurance. Randomly sample n=15 people. Let X denote the number in the sample with no health insurance. 1. 
What is the probability that exactly 3 of the 15 sampled have no health insurance?2. What is the probability that at most one of those sampled has no health insurance?First part of the solution: **Calculating Binomial Probabilities**$$P(X=3) = ^{15}C_3(0.2)^3(0.8)^{12} = 0.25$$That is, there is a 25% chance, in sampling 15 random people, that we would find exactly 3 that had no health insurance.
###Code
import scipy.stats as stats
n, r, p = 15, 3, 0.2
stats.binom.pmf(r, n, p) # Using PMF
###Output
_____no_output_____
###Markdown
Second part of the solution: **Calculating Cumulative Binomial Probabilities** "At most one" means either 0 or 1 of those sampled have no health insurance. That is, we need to find:$$P(X\leq1) = P(X=0)+P(X=1)$$Using the probability mass function for a binomial random variable with n=15 and p=0.2, we have$$^{15}C_0 (0.2)^{0}(0.8)^{15} + ^{15}C_1 (0.2)^{1}(0.8)^{14} = 0.0352 + 0.1319 = 0.167$$That is, we have a 16.7% chance, in sampling 15 random people, that we would find at most one that had no health insurance.
###Code
import scipy.stats as stats
n, r, p = 15, 1, 0.2
stats.binom.cdf(r, n, p) # Using CDF
###Output
_____no_output_____
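###Markdown
As a quick sanity check of the mean and variance properties stated above (a sketch using scipy.stats.binom, not part of the original example), we can compare scipy's built-in moments with $n*p$ and $n*p*(1-p)$ for the health-insurance example.
###Code
import scipy.stats as stats
n, p = 15, 0.2
print(stats.binom.mean(n, p), n * p)            # mean: both give 3.0
print(stats.binom.var(n, p), n * p * (1 - p))   # variance: both give 2.4
###Output
_____no_output_____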
###Markdown
**Effect of n and p on Shape**1. For small *p* and small *n*, the binomial distribution is what we call skewed right. 2. For large *p* and small *n*, the binomial distribution is what we call skewed left. 3. For *p*=0.5, the binomial distribution is symmetric, for both large and small *n*. 4. For small *p* and large *n*, the binomial distribution approaches symmetry.
###Code
from numpy import random
import matplotlib.pyplot as plt
import seaborn as sns
plt.figure(figsize=(8,6))
sns.set_style("whitegrid")
sample1 = random.binomial(n = 15, p = 0.2, size = 250)
sample2 = random.binomial(n = 15, p = 0.8, size = 250)
sample3 = random.binomial(n = 15, p = 0.5, size = 250)
sample4 = random.binomial(n = 40, p = 0.2, size = 250)
sns.kdeplot(sample1, label="sample1")
sns.kdeplot(sample2, label="sample2")
sns.kdeplot(sample3, label="sample3")
sns.kdeplot(sample4, label="sample4")
plt.legend(labels=["sample1","sample2","sample3","sample4"])
plt.show()
###Output
_____no_output_____
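###Markdown
The kernel-density plots above are built from random samples, so they only approximate the shapes described. As an alternative sketch (an addition, using scipy.stats.binom directly), the exact probability mass functions for the same parameter combinations can be plotted as follows.
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import binom
fig, ax = plt.subplots(figsize=(8, 6))
for n, p in [(15, 0.2), (15, 0.8), (15, 0.5), (40, 0.2)]:
    k = np.arange(0, n + 1)
    ax.plot(k, binom.pmf(k, n, p), marker='o', label=f'n={n}, p={p}')  # exact P(X=k)
ax.set_xlabel('k')
ax.set_ylabel('P(X=k)')
ax.legend()
plt.show()
###Output
_____no_output_____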
###Markdown
NOTE: The **Bernoulli distribution** is a special case of the binomial distribution where a single trial is conducted (so n would be 1 for such a binomial distribution). It is also a special case of the two-point distribution, for which the possible outcomes need not be 0 and 1.$$P(X=n) = p^n (1-p)^{1-n}, \quad n \in \{0, 1\}$$ Geometric distributionThe geometric distribution describes the probability of experiencing a certain number of failures before experiencing the first success in a series of Bernoulli trials. A Bernoulli trial is an experiment with only two possible outcomes – "success" or "failure" – and the probability of success is the same each time the experiment is conducted. An example of a Bernoulli trial is a coin flip. The coin can only land on two sides (we could call heads a "success" and tails a "failure") and the probability of success on each flip is 0.5, assuming the coin is fair.If a random variable X follows a geometric distribution, then the probability of experiencing k failures before experiencing the first success can be found by the following formula:$$P(X=k) = (1-p)^k p$$where:$k$ is the number of failures before the first success $p$ is the probability of success on each trial For example, suppose we want to know how many times we'll have to flip a fair coin until it lands on heads. We can use the formula above to determine the probability of experiencing 0, 1, 2, 3 failures, etc. before the coin lands on heads:**Note:** The coin can experience 0 'failures' if it lands on heads on the first flip.$P(X=0) = (1-.5)^0(.5) = 0.5$$P(X=1) = (1-.5)^1(.5) = 0.25$$P(X=2) = (1-.5)^2(.5) = 0.125$$P(X=3) = (1-.5)^3(.5) = 0.0625$ Uniform distribution The uniform distribution is a probability distribution in which every value in an interval from $a$ to $b$ is equally likely to occur.If a random variable $X$ follows a uniform distribution, then the probability that $X$ takes on a value between $x_1$ and $x_2$ can be found by the following formula:$$P(x_1 < X < x_2) = \frac{x_2 - x_1}{b - a}$$where:$x_1$: the lower value of interest $x_2$: the upper value of interest $a$: the minimum possible value $b$: the maximum possible value ![](data/ud.png)For example, suppose the weight of dolphins is uniformly distributed between 100 pounds and 150 pounds.If we select a dolphin at random, we can use the formula above to determine the probability that the chosen dolphin will weigh between 120 and 130 pounds:$P(120 < X < 130) = (130 - 120) / (150 - 100) = 10 / 50 = 0.2$The probability that the chosen dolphin will weigh between 120 and 130 pounds is 0.2. (Both the geometric and the uniform examples are checked numerically with scipy.stats after the Poisson example below.)**Properties of the Uniform Distribution**The uniform distribution has the following properties:+ Mean: $(a + b) / 2$+ Median: $(a + b) / 2$+ Standard Deviation: $\sqrt{(b - a)^2 / 12}$+ Variance: $(b - a)^2 / 12$ **Poisson distribution**Again, to understand the Poisson distribution, we first have to understand what **Poisson experiments** are.A Poisson experiment is an experiment that has the following properties:+ The number of successes in the experiment can be counted.+ The mean number of successes that occurs during a specific interval of time (or space) is known.+ Each outcome is independent.+ The probability that a success will occur is proportional to the size of the intervalOne example of a Poisson experiment is the number of births per hour at a given hospital. For example, suppose a particular hospital experiences an average of 10 births per hour. 
This is a Poisson experiment because it has the following four properties:+ The number of successes in the experiment can be counted – We can count the number of births.+ The mean number of successes that occurs during a specific interval of time is known – It is known that an average of 10 births per hour occur.+ Each outcome is independent – The probability that one mother gives birth during a given hour is independent of the probability of another mother giving birth.+ The probability that a success will occur is proportional to the size of the interval – the longer the interval of time, the higher the probability that a birth will occur.We can use the Poisson distribution to answer questions about probabilities regarding this Poisson experiment such as:+ What is the probability that more than 12 births occur in a given hour?+ What is the probability that less than 5 births occur in a given hour?+ What is the probability that between 8 and 11 births occur in a given hour?If a random variable $X$ follows a Poisson distribution, then the probability of $X = k$ successes can be found by the following formula: $$P(X=k)=\frac{\lambda^k e^{-\lambda}}{k!}$$ where $P(X=k)$ is the probability of the event occurring $k$ number of times, $k$ is the number of occurrences of the event, and $\lambda$ represents the mean number of events that occur during a specific interval.The Poisson distribution can be used to model the number of occurrences over a given period, for instance: + number of arrivals at a restaurant per hour+ number of work-related accidents occurring at a factory over a year+ number of customer complaints at a call center in a week Properties of a Poisson distribution:1. Mean=variance=$\lambda$. In a Poisson distribution, the mean and variance have the same numeric values. 2. The events are independent, random, and cannot occur at the same time. ![](data/poi.png)The horizontal axis is the index k, the number of occurrences. λ is the expected rate of occurrences. The vertical axis is the probability of k occurrences given λ. The function is defined only at integer values of k; the connecting lines are only guides for the eye.**Example:**In a subway station, the average number of ticket-vending machines out of operation is two. Assuming that the number of machines out of operation follows a Poisson distribution, calculate the probability that, at a given point in time:1. Exactly three machines are out of operation2. More than two machines are out of operation
###Code
import scipy.stats as stats
l, r = 2, 3
print(stats.poisson.pmf(r, l))      # P(X = 3): probability mass function
l, r = 2, 2
print(1 - stats.poisson.cdf(r, l))  # P(X > 2) = 1 - cumulative distribution function
###Output
_____no_output_____
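###Markdown
The geometric (coin flip) and uniform (dolphin weight) examples above can also be checked numerically. A small sketch using scipy.stats (an addition, not part of the original notebook); note that scipy's geom counts the number of trials up to and including the first success, so k failures correspond to k+1 trials.
###Code
from scipy.stats import geom, uniform
# geometric: probability of k failures before the first head with p = 0.5
p = 0.5
for k in range(4):
    print(k, geom.pmf(k + 1, p))            # 0.5, 0.25, 0.125, 0.0625
# uniform on [100, 150]: P(120 < X < 130)
dolphin = uniform(loc=100, scale=50)        # loc = a, scale = b - a
print(dolphin.cdf(130) - dolphin.cdf(120))  # 0.2
###Output
_____no_output_____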
###Markdown
**Continuous probability distributions**There are several continuous probability distributions, including the normal distribution, Student's T distribution, the chi-square distribution, and the F distribution used in ANOVA. **Normal distribution** A normal distribution is a symmetrical bell-shaped curve, defined by its mean ($\mu$) and standard deviation ($\sigma$)Characteristics of a normal distribution: 1. The central value ($\mu$) is also the mode and the median for a normal distribution2. Checking for normality: In a normal distribution, the difference between the 75th percentile value ($Q_3$) and the 50th percentile value (median or $Q_2$) equals the difference between the median ($Q_2$) and the 25th percentile ($Q_1$). In other words, $$Q_3 - Q_2 = Q_2 - Q_1$$If the distribution is skewed, this equation does not hold. + In a right-skewed distribution, $(Q_3 - Q_2) > (Q_2 - Q_1)$ + In a left-skewed distribution, $(Q_2 - Q_1) > (Q_3 - Q_2)$ **Standard normal distribution** To standardize units and compare distributions with different means and variances, we use a standard normal distribution.Properties of a standard normal distribution:+ The standard normal distribution is a normal distribution with a mean of 0 and a standard deviation of 1.+ Any normal distribution can be converted into a standard normal distribution using the following formula:$$z = \frac {x-\mu}{\sigma}$$ where $\mu$ and $\sigma$ are the mean and standard deviation of the original normal distribution.A **z-score** (also called a **standard score**) gives you an idea of how far from the mean a data point is.In a standard normal distribution, + 68.2% of the values lie within 1 standard deviation of the mean+ 95.4% of the values lie within 2 standard deviations of the mean+ 99.7% lie within 3 standard deviations of the mean+ The area under the standard normal distribution between any two points represents the proportion of values that lie between these two points. For instance, the area under the curve on either side of the mean is 0.5. Put another way, 50% of the values lie on either side of the mean.![](https://www.sixsigmadaily.com/wp-content/uploads/sites/4/2012/08/Bell-Curve-Standard-Deviation.jpg)The standard normal distribution is a probability distribution, so the area under the curve between two points tells you the probability of variables taking on a range of values. The total area under the curve is 1 or 100%.Every z-score has an associated p-value that tells you the probability of all values below or above that z-score occurring. This is the area under the curve to the left or right of that z-score.![](data/snd.png) SciPy.Stats and VisualizationsNow that we have an understanding of the different distributions, let's plot them and visualize. We will also learn how to compute useful values using the scipy.stats module.
###Code
from scipy.stats import norm
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
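###Markdown
Before plotting, here is a quick numerical check of the empirical rule and the z-score formula stated above (a sketch added to the original notebook; the values mu=20, sigma=4 and x=25 are made up for illustration).
###Code
from scipy.stats import norm
# proportion of a normal distribution within 1, 2 and 3 standard deviations of the mean
for k in [1, 2, 3]:
    print(k, norm.cdf(k) - norm.cdf(-k))   # ~0.683, ~0.954, ~0.997
# z-score of x = 25 for a normal distribution with mean 20 and standard deviation 4
mu, sigma, x = 20, 4, 25
print((x - mu) / sigma)                    # 1.25 standard deviations above the mean
###Output
_____no_output_____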
###Markdown
**Norm.pdf value** Norm.pdf returns a PDF value. The following is the PDF value when $x=1$, $\mu=0$, $\sigma=1$.
###Code
print(norm.pdf(x=1.0, loc=0, scale=1))  # pdf of the standard normal at x = 1
fig, ax = plt.subplots()
x= np.arange(-4,4,0.001)
ax.plot(x, norm.pdf(x))
ax.set_ylim(0,0.45) # range
ax.axhline(y=0.24,xmax=0.61,color='r') # horizontal line
ax.axvline(x=1, ymax=0.53, color='r',alpha=0.5) # vertical line
xplot = ax.plot([1], [0.24], marker='o', markersize=8, color="red") # coordinate point
ax.set_yticks([]) # remove y axis label
ax.set_xticks([]) # remove x axis label
ax.set_xlabel('x',fontsize=20) # set x label
ax.set_ylabel('pdf(x)',fontsize=20,rotation=0) # set y label
ax.xaxis.set_label_coords(0.61, -0.02) # x label coordinate
ax.yaxis.set_label_coords(-0.1, 0.5) # y label coordinate
plt.show()
###Output
_____no_output_____
###Markdown
**Normal distribution PDF with different standard deviations** Let's plot the probability density functions of normal distributions that share the same mean but have different standard deviations.scipy.stats.norm.pdf has the keywords loc and scale. The location (loc) keyword specifies the mean and the scale (scale) keyword specifies the standard deviation.
###Code
fig, ax = plt.subplots()
x = np.linspace(-10,10,100)
stdvs = [1.0, 2.0, 3.0, 4.0]
for s in stdvs:
ax.plot(x, norm.pdf(x,scale=s), label='stdv=%.1f' % s)
ax.set_xlabel('x')
ax.set_ylabel('pdf(x)')
ax.set_title('Normal Distribution')
ax.legend(loc='best', frameon=True)
ax.set_ylim(0,0.45)
ax.grid(True)
###Output
_____no_output_____
###Markdown
**Normal distribution PDF with different means** Let's plot the probability density functions of normal distributions with a standard deviation of 1 and different means. The mean of the distribution determines the location of the center of the graph. As you can see in the graph below, the shape of the graph does not change when the mean changes; the graph is simply translated horizontally.
###Code
fig, ax = plt.subplots()
x = np.linspace(-10,10,100)
means = [0.0, 1.0, 2.0, 5.0]
for mean in means:
ax.plot(x, norm.pdf(x,loc=mean), label='mean=%.1f' % mean)
ax.set_xlabel('x')
ax.set_ylabel('pdf(x)')
ax.set_title('Normal Distribution')
ax.legend(loc='best', frameon=True)
ax.set_ylim(0,0.45)
ax.grid(True)
###Output
_____no_output_____
###Markdown
**A cumulative normal distribution function** The cumulative distribution function of a random variable X, evaluated at x, is the probability that X will take a value less than or equal to x. Since the normal distribution is a continuous distribution, the shaded area under the curve represents the probability that X is less than or equal to x. fill_between(x, y1, y2=0) fills the area between the two curves y1 and y2, where y2 defaults to 0.
###Code
fig, ax = plt.subplots()
# for distribution curve
x= np.arange(-4,4,0.001)
ax.plot(x, norm.pdf(x))
ax.set_title("Cumulative normal distribution")
ax.set_xlabel('x')
ax.set_ylabel('pdf(x)')
ax.grid(True)
# for fill_between
px=np.arange(-4,1,0.01)
ax.set_ylim(0,0.5)
ax.fill_between(px,norm.pdf(px),alpha=0.5, color='b')
# for text
ax.text(-1,0.1,"cdf(x)", fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
Given a mean of 3 and a standard deviation of 2, we can find the probability $P(X<2)$. In norm.cdf, the location (loc) keyword specifies the mean and the scale (scale) keyword specifies the standard deviation.
###Code
from scipy.stats import norm
lessthan2=norm.cdf(x=2, loc=3, scale=2)
print(lessthan2)
fig, ax = plt.subplots()
# for distribution curve
x= np.arange(-4,10,0.001)
ax.plot(x, norm.pdf(x,loc=3,scale=2))
ax.set_title("N(3,$2^2$)")
ax.set_xlabel('x')
ax.set_ylabel('pdf(x)')
ax.grid(True)
# for fill_between
px=np.arange(-4,2,0.01)
ax.set_ylim(0,0.25)
ax.fill_between(px,norm.pdf(px,loc=3,scale=2),alpha=0.5, color='b')
# for text
ax.text(-0.5,0.02,round(lessthan2,2), fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
**Interval between variables**To find the probability that a variable falls within an interval, you subtract the cdf evaluated at the lower bound from the cdf evaluated at the upper bound. Let's find $P(0.5<X<2)$ with a mean of 1 and a standard deviation of 2.
###Code
norm(1, 2).cdf(2) - norm(1,2).cdf(0.5)
fig, ax = plt.subplots()
# for distribution curve
x= np.arange(-6,8,0.001)
ax.plot(x, norm.pdf(x,loc=1,scale=2))
ax.set_title("N(1,$2^2$)")
ax.set_xlabel('x')
ax.set_ylabel('pdf(x)')
ax.grid(True)
px=np.arange(0.5,2,0.01)
ax.set_ylim(0,0.25)
ax.fill_between(px,norm.pdf(px,loc=1,scale=2),alpha=0.5, color='b')
pro=norm(1, 2).cdf(2) - norm(1,2).cdf(0.5)
ax.text(0.2,0.02,round(pro,2), fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
Survival functionTo find $P(X>4)$, we can use sf, the survival function, which returns 1-cdf. For example, norm.sf(x=4, loc=3, scale=2) returns the probability that $X$ is greater than $x=4$, i.e. $P(X>4)$, when $\mu=3,\sigma=2$.
###Code
gr4sf=norm.sf(x=4, loc=3, scale=2)
gr4sf
fig, ax = plt.subplots()
x= np.arange(-4,10,0.001)
ax.plot(x, norm.pdf(x,loc=3,scale=2))
ax.set_title("N(3,$2^2$)")
ax.set_xlabel('x')
ax.set_ylabel('pdf(x)')
ax.grid(True)
px=np.arange(4,10,0.01)
ax.set_ylim(0,0.25)
ax.fill_between(px,norm.pdf(px,loc=3,scale=2),alpha=0.5, color='b')
ax.text(4.5,0.02,"sf(x) %.2f" %(gr4sf), fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
The above graph is the same as $1-P(X<4)$
###Code
gr4=norm.cdf(x=4, loc=3, scale=2)
gr14=1-gr4
fig, ax = plt.subplots()
x= np.arange(-4,10,0.001)
ax.plot(x, norm.pdf(x,loc=3,scale=2))
ax.set_title("N(3,$2^2$)")
ax.set_xlabel('x')
ax.set_ylabel('pdf(x)')
ax.grid(True)
px=np.arange(4,10,0.01)
ax.set_ylim(0,0.25)
ax.fill_between(px,norm.pdf(px,loc=3,scale=2),alpha=0.5, color='b')
px1=np.arange(-4,4,0.01)
ax.fill_between(px1,norm.pdf(px1,loc=3,scale=2),alpha=0.5, color='r')
ax.text(4.5,0.02,round(gr14,2), fontsize=20)
ax.text(1,0.02,round(gr4,2), fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
**Finding quantiles**$K$ in $P(X\leq K)=0.95$ is known as a quantile, in this case, the 95% quantile. **Percent point function**ppf, the percent point function, is the inverse of the cdf. Given a mean of 1 and a standard deviation of 3, we can find the quantile $a$ in $P(X<a)=0.506$ by using ppf.
###Code
norm.ppf(q=0.506, loc=1, scale=3)
fig, ax = plt.subplots()
x= np.arange(-10,10,0.001)
ax.plot(x, norm.pdf(x,loc=1,scale=3))
ax.set_title("N(1,$3^2$)")
ax.set_xlabel('x')
ax.set_ylabel('pdf(x)')
ax.grid(True)
xpoint=norm.ppf(q=0.506, loc=1, scale=3)
px=np.arange(-10,xpoint,0.01)
ax.set_ylim(0,0.15)
ax.fill_between(px,norm.pdf(px,loc=1,scale=3),alpha=0.5, color='b')
ax.text(.8,0.02,"x= %.2f" %xpoint, fontsize=20)
ax.text(-5,0.05,"P(X)=0.506", fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
**Inverse survival function** With the same mean and standard deviation, we can find the quantile b in $P(X>b)=0.198$ using the inverse survival function isf. This is the same as using ppf with $q=(1-0.198)$.
###Code
print(norm.isf(q=0.198, loc=1, scale=3))      # inverse survival function
print(norm.ppf(q=(1-0.198), loc=1, scale=3))  # same quantile via ppf
###Output
_____no_output_____
###Markdown
Interval around the meannorm.interval returns the endpoints of the range that contains alpha percent of the distribution. For example, with a mean of 0 and a standard deviation of 1, to cover 95% of the probability, norm.interval returns x values centered around the mean, in this case $\mu$=0.
###Code
a,b = norm.interval(alpha=0.95, loc=0, scale=1)
print(a,b)
fig, ax = plt.subplots()
x= np.arange(-4,4,0.001)
ax.plot(x, norm.pdf(x))
ax.set_title("Interval")
ax.set_xlabel('x')
ax.set_ylabel('pdf(x)')
ax.grid(True)
px=np.arange(a,b,0.01)
ax.set_ylim(0,0.5)
ax.fill_between(px,norm.pdf(px),alpha=0.5, color='b')
ax.text(-0.5,0.1,"0.95", fontsize=20)
plt.show()
###Output
_____no_output_____ |
karakterDiziMetod/encode.ipynb | ###Markdown
encode() With this method we can encode a string using whatever encoding system we want. In Python 3.x the default character encoding is utf-8. Using the encode() method, however, we can switch away from the standard encoding and encode in the cp1254 system instead.
###Code
x = "çilek".encode("cp1254")
print(x)
txt = "My name is Stöle"
x = txt.encode()
print(x)
txt = "My name is Stöle"
x = txt.encode("utf-8")
print(x)
###Output
b'My name is St\xc3\xb6le'
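###Markdown
As a small round-trip sketch (an addition to the original notebook): bytes produced by encode() can be turned back into a string with decode(), as long as the same encoding is used.
###Code
data = "çilek".encode("cp1254")
print(data)                   # encoded bytes
print(data.decode("cp1254"))  # back to the original string 'çilek'
###Output
_____no_output_____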
|
caffe2h5.ipynb | ###Markdown
**Enable the GPU runtime by clicking *Runtime* in the top navigation bar, then clicking *Change Runtime Type*, selecting *GPU* as *Hardware Accelerator*, and clicking Save** **Mount your Google Drive to load the caffe model and prototxt file**
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
Mounted at /content/drive
###Markdown
**Installing Miniconda for easy and straightforward installation of Caffe**
###Code
%%bash
MINICONDA_INSTALLER_SCRIPT=Miniconda3-4.5.4-Linux-x86_64.sh
MINICONDA_PREFIX=/usr/local
wget https://repo.continuum.io/miniconda/$MINICONDA_INSTALLER_SCRIPT
chmod +x $MINICONDA_INSTALLER_SCRIPT
./$MINICONDA_INSTALLER_SCRIPT -b -f -p $MINICONDA_PREFIX
###Output
PREFIX=/usr/local
installing: python-3.6.5-hc3d631a_2 ...
installing: ca-certificates-2018.03.07-0 ...
installing: conda-env-2.6.0-h36134e3_1 ...
installing: libgcc-ng-7.2.0-hdf63c60_3 ...
installing: libstdcxx-ng-7.2.0-hdf63c60_3 ...
installing: libffi-3.2.1-hd88cf55_4 ...
installing: ncurses-6.1-hf484d3e_0 ...
installing: openssl-1.0.2o-h20670df_0 ...
installing: tk-8.6.7-hc745277_3 ...
installing: xz-5.2.4-h14c3975_4 ...
installing: yaml-0.1.7-had09818_2 ...
installing: zlib-1.2.11-ha838bed_2 ...
installing: libedit-3.1.20170329-h6b74fdf_2 ...
installing: readline-7.0-ha6073c6_4 ...
installing: sqlite-3.23.1-he433501_0 ...
installing: asn1crypto-0.24.0-py36_0 ...
installing: certifi-2018.4.16-py36_0 ...
installing: chardet-3.0.4-py36h0f667ec_1 ...
installing: idna-2.6-py36h82fb2a8_1 ...
installing: pycosat-0.6.3-py36h0a5515d_0 ...
installing: pycparser-2.18-py36hf9f622e_1 ...
installing: pysocks-1.6.8-py36_0 ...
installing: ruamel_yaml-0.15.37-py36h14c3975_2 ...
installing: six-1.11.0-py36h372c433_1 ...
installing: cffi-1.11.5-py36h9745a5d_0 ...
installing: setuptools-39.2.0-py36_0 ...
installing: cryptography-2.2.2-py36h14c3975_0 ...
installing: wheel-0.31.1-py36_0 ...
installing: pip-10.0.1-py36_0 ...
installing: pyopenssl-18.0.0-py36_0 ...
installing: urllib3-1.22-py36hbe7ace6_0 ...
installing: requests-2.18.4-py36he2e5f8d_1 ...
installing: conda-4.5.4-py36_0 ...
installation finished.
WARNING:
You currently have a PYTHONPATH environment variable set. This may cause
unexpected behavior when running the Python interpreter in Miniconda3.
For best results, please verify that your PYTHONPATH only points to
directories of packages that are compatible with the Python interpreter
in Miniconda3: /usr/local
###Markdown
**Installing Python 3.6 and updating Conda**
###Code
%%bash
conda install --channel defaults conda python=3.6 --yes
conda update --channel defaults --all --yes
###Output
Solving environment: ...working... done
## Package Plan ##
environment location: /usr/local
added / updated specs:
- conda
- python=3.6
The following packages will be downloaded:
package | build
---------------------------|-----------------
ld_impl_linux-64-2.33.1 | h53a641e_7 645 KB
libffi-3.3 | he6710b0_2 54 KB
xz-5.2.5 | h7b6447c_0 438 KB
pysocks-1.7.1 | py36h06a4308_0 30 KB
ncurses-6.2 | he6710b0_1 1.1 MB
zlib-1.2.11 | h7b6447c_3 120 KB
brotlipy-0.7.0 |py36h27cfd23_1003 349 KB
pip-21.0.1 | py36h06a4308_0 2.0 MB
python-3.6.12 | hcff3b4d_2 34.0 MB
ca-certificates-2021.1.19 | h06a4308_0 128 KB
libgcc-ng-9.1.0 | hdf63c60_0 8.1 MB
six-1.15.0 | pyhd3eb1b0_0 13 KB
tqdm-4.56.0 | pyhd3eb1b0_0 76 KB
pycparser-2.20 | py_2 94 KB
sqlite-3.33.0 | h62c20be_0 2.0 MB
conda-package-handling-1.7.2| py36h03888b9_0 967 KB
idna-2.10 | pyhd3eb1b0_0 52 KB
cryptography-3.3.1 | py36h3c74f83_1 633 KB
tk-8.6.10 | hbc83047_0 3.2 MB
setuptools-52.0.0 | py36h06a4308_0 933 KB
ruamel_yaml-0.15.87 | py36h7b6447c_1 256 KB
pyopenssl-20.0.1 | pyhd3eb1b0_1 48 KB
certifi-2020.12.5 | py36h06a4308_0 144 KB
yaml-0.2.5 | h7b6447c_0 87 KB
conda-4.9.2 | py36h06a4308_0 3.1 MB
libedit-3.1.20191231 | h14c3975_1 121 KB
wheel-0.36.2 | pyhd3eb1b0_0 31 KB
_libgcc_mutex-0.1 | main 3 KB
openssl-1.1.1j | h27cfd23_0 3.8 MB
cffi-1.14.5 | py36h261ae71_0 224 KB
requests-2.25.1 | pyhd3eb1b0_0 51 KB
urllib3-1.26.3 | pyhd3eb1b0_0 99 KB
readline-8.1 | h27cfd23_0 464 KB
chardet-4.0.0 |py36h06a4308_1003 213 KB
pycosat-0.6.3 | py36h27cfd23_0 107 KB
libstdcxx-ng-9.1.0 | hdf63c60_0 4.0 MB
------------------------------------------------------------
Total: 67.7 MB
The following NEW packages will be INSTALLED:
_libgcc_mutex: 0.1-main
brotlipy: 0.7.0-py36h27cfd23_1003
conda-package-handling: 1.7.2-py36h03888b9_0
ld_impl_linux-64: 2.33.1-h53a641e_7
tqdm: 4.56.0-pyhd3eb1b0_0
The following packages will be UPDATED:
ca-certificates: 2018.03.07-0 --> 2021.1.19-h06a4308_0
certifi: 2018.4.16-py36_0 --> 2020.12.5-py36h06a4308_0
cffi: 1.11.5-py36h9745a5d_0 --> 1.14.5-py36h261ae71_0
chardet: 3.0.4-py36h0f667ec_1 --> 4.0.0-py36h06a4308_1003
conda: 4.5.4-py36_0 --> 4.9.2-py36h06a4308_0
cryptography: 2.2.2-py36h14c3975_0 --> 3.3.1-py36h3c74f83_1
idna: 2.6-py36h82fb2a8_1 --> 2.10-pyhd3eb1b0_0
libedit: 3.1.20170329-h6b74fdf_2 --> 3.1.20191231-h14c3975_1
libffi: 3.2.1-hd88cf55_4 --> 3.3-he6710b0_2
libgcc-ng: 7.2.0-hdf63c60_3 --> 9.1.0-hdf63c60_0
libstdcxx-ng: 7.2.0-hdf63c60_3 --> 9.1.0-hdf63c60_0
ncurses: 6.1-hf484d3e_0 --> 6.2-he6710b0_1
openssl: 1.0.2o-h20670df_0 --> 1.1.1j-h27cfd23_0
pip: 10.0.1-py36_0 --> 21.0.1-py36h06a4308_0
pycosat: 0.6.3-py36h0a5515d_0 --> 0.6.3-py36h27cfd23_0
pycparser: 2.18-py36hf9f622e_1 --> 2.20-py_2
pyopenssl: 18.0.0-py36_0 --> 20.0.1-pyhd3eb1b0_1
pysocks: 1.6.8-py36_0 --> 1.7.1-py36h06a4308_0
python: 3.6.5-hc3d631a_2 --> 3.6.12-hcff3b4d_2
readline: 7.0-ha6073c6_4 --> 8.1-h27cfd23_0
requests: 2.18.4-py36he2e5f8d_1 --> 2.25.1-pyhd3eb1b0_0
ruamel_yaml: 0.15.37-py36h14c3975_2 --> 0.15.87-py36h7b6447c_1
setuptools: 39.2.0-py36_0 --> 52.0.0-py36h06a4308_0
six: 1.11.0-py36h372c433_1 --> 1.15.0-pyhd3eb1b0_0
sqlite: 3.23.1-he433501_0 --> 3.33.0-h62c20be_0
tk: 8.6.7-hc745277_3 --> 8.6.10-hbc83047_0
urllib3: 1.22-py36hbe7ace6_0 --> 1.26.3-pyhd3eb1b0_0
wheel: 0.31.1-py36_0 --> 0.36.2-pyhd3eb1b0_0
xz: 5.2.4-h14c3975_4 --> 5.2.5-h7b6447c_0
yaml: 0.1.7-had09818_2 --> 0.2.5-h7b6447c_0
zlib: 1.2.11-ha838bed_2 --> 1.2.11-h7b6447c_3
Downloading and Extracting Packages
Preparing transaction: ...working... done
Verifying transaction: ...working... done
Executing transaction: ...working... done
Collecting package metadata (current_repodata.json): ...working... done
Solving environment: ...working... done
## Package Plan ##
environment location: /usr/local
The following packages will be downloaded:
package | build
---------------------------|-----------------
six-1.15.0 | py36h06a4308_0 27 KB
------------------------------------------------------------
Total: 27 KB
The following packages will be REMOVED:
asn1crypto-0.24.0-py36_0
conda-env-2.6.0-h36134e3_1
The following packages will be SUPERSEDED by a higher-priority channel:
six pkgs/main/noarch::six-1.15.0-pyhd3eb1~ --> pkgs/main/linux-64::six-1.15.0-py36h06a4308_0
Downloading and Extracting Packages
six-1.15.0 | 27 KB | | 0%
six-1.15.0 | 27 KB | ########## | 100%
Preparing transaction: ...working... done
Verifying transaction: ...working... done
Executing transaction: ...working... done
###Markdown
**Setting up required system paths**
###Code
import sys
sys.path
['',
'/env/python',
'/usr/lib/python36.zip',
'/usr/lib/python3.6',
'/usr/lib/python3.6/lib-dynload',
'/usr/local/lib/python3.6/dist-packages', # pre-installed packages
'/usr/lib/python3/dist-packages',
'/usr/local/lib/python3.6/dist-packages/IPython/extensions',
'/root/.ipython']
!ls /usr/local/lib/python3.6/dist-packages
_ = (sys.path
.append("/usr/local/lib/python3.6/site-packages"))
###Output
absl
absl_py-0.10.0.dist-info
alabaster
alabaster-0.7.12.dist-info
albumentations
albumentations-0.1.12.dist-info
altair
altair-4.1.0.dist-info
apiclient
appdirs-1.4.4.dist-info
appdirs.py
argon2
argon2_cffi-20.1.0.dist-info
asgiref
asgiref-3.3.1.dist-info
astor
astor-0.8.1.dist-info
astropy
astropy-4.1.dist-info
astunparse
astunparse-1.6.3.dist-info
async_generator
async_generator-1.10.dist-info
atari_py
atari_py-0.2.6.dist-info
atomicwrites
atomicwrites-1.4.0.dist-info
attr
attrs-20.3.0.dist-info
audioread
audioread-2.1.9.dist-info
autograd
autograd-1.3.dist-info
babel
Babel-2.9.0.dist-info
backcall
backcall-0.2.0.dist-info
beautifulsoup4-4.6.3.dist-info
bin
bleach
bleach-3.3.0.dist-info
blis
blis-0.4.1.dist-info
bokeh
bokeh-2.1.1.dist-info
bottleneck
Bottleneck-1.3.2.dist-info
branca
branca-0.4.2.dist-info
bs4
bs4-0.0.1.dist-info
bson
cachecontrol
CacheControl-0.12.6.dist-info
cachetools
cachetools-4.2.1.dist-info
caffe2
catalogue-1.0.0.dist-info
catalogue.py
certifi
certifi-2020.12.5.dist-info
cffi
cffi-1.14.4.dist-info
_cffi_backend.cpython-36m-x86_64-linux-gnu.so
cffi.libs
chainer
chainer-7.4.0.dist-info
chainermn
chainerx
chardet
chardet-3.0.4.dist-info
chess
click
click-7.1.2.dist-info
client
cloudpickle
cloudpickle-1.3.0.dist-info
cmake
cmake-3.12.0.dist-info
cmdstanpy
cmdstanpy-0.9.5.dist-info
colab
colorlover
colorlover-0.3.0.dist-info
community
community-1.0.0b1.dist-info
contextlib2-0.5.5.dist-info
contextlib2.py
convertdate
convertdate-2.3.0.dist-info
coverage
coverage-3.7.1.dist-info
coveralls
coveralls-0.5.dist-info
crcmod
crcmod-1.7.dist-info
cufflinks
cufflinks-0.17.3.dist-info
cupy
cupy_cuda101-7.4.0.dist-info
cupyx
cv2
_cvxcore.cpython-36m-x86_64-linux-gnu.so
cvxopt
cvxopt-1.2.5.dist-info
cvxopt.libs
cvxpy
cvxpy-1.0.31.dist-info
cycler-0.10.0.dist-info
cycler.py
cymem
cymem-2.0.5.dist-info
Cython
Cython-0.29.21.dist-info
cython.py
daft-0.0.4.dist-info
daft.py
dask
dask-2.12.0.dist-info
dataclasses-0.8.dist-info
dataclasses.py
datascience
datascience-0.10.6.dist-info
dateutil
debugpy
debugpy-1.0.0.dist-info
decorator-4.4.2.dist-info
decorator.py
defusedxml
defusedxml-0.6.0.dist-info
descartes
descartes-1.1.0.dist-info
dill
dill-0.3.3.dist-info
distributed
distributed-1.25.3.dist-info
_distutils_hack
distutils-precedence.pth
django
Django-3.1.6.dist-info
dlib-19.18.0.dist-info
dlib.cpython-36m-x86_64-linux-gnu.so
dm_tree-0.1.5.dist-info
docopt-0.6.2.dist-info
docopt.py
docs
docutils
docutils-0.16.dist-info
dopamine
dopamine_rl-1.0.5.dist-info
dot_parser.py
earthengine_api-0.1.238.dist-info
easydict
easydict-1.9.dist-info
ecos
ecos-2.0.7.post1.dist-info
_ecos.cpython-36m-x86_64-linux-gnu.so
editdistance
editdistance-0.5.3.dist-info
ee
en_core_web_sm
en_core_web_sm-2.2.5.dist-info
entrypoints-0.3.dist-info
entrypoints.py
ephem
ephem-3.7.7.1.dist-info
et_xmlfile
et_xmlfile-1.0.1.dist-info
examples
fa2
fa2-0.3.5.dist-info
fancyimpute
fancyimpute-0.4.3.dist-info
fastai
fastai-1.0.61.dist-info
fastdtw
fastdtw-0.3.4.dist-info
fastprogress
fastprogress-1.0.0.dist-info
fastrlock
fastrlock-0.5.dist-info
fbprophet
fbprophet-0.7.1-py3.6.egg-info
feather
feather_format-0.4.1.dist-info
filelock-3.0.12.dist-info
filelock.py
firebase_admin
firebase_admin-4.4.0.dist-info
fix_yahoo_finance
fix_yahoo_finance-0.0.22.dist-info
flask
Flask-1.1.2.dist-info
flatbuffers
flatbuffers-1.12.dist-info
folium
folium-0.8.3.dist-info
future
future-0.16.0.dist-info
gast
gast-0.3.3.dist-info
gdown
gdown-3.6.4.dist-info
gensim
gensim-3.6.0.dist-info
geographiclib
geographiclib-1.50.dist-info
geopy
geopy-1.17.0.dist-info
gin
gin_config-0.4.0.dist-info
github2pypi
glob2
glob2-0.7.dist-info
google
google-2.0.3.dist-info
googleapiclient
google_api_core-1.16.0.dist-info
google_api_core-1.16.0-py3.8-nspkg.pth
google_api_python_client-1.7.12.dist-info
googleapis_common_protos-1.52.0.dist-info
googleapis_common_protos-1.52.0-py3.8-nspkg.pth
google_auth-1.25.0.dist-info
google_auth-1.25.0-py3.9-nspkg.pth
google_auth_httplib2-0.0.4.dist-info
google_auth_httplib2.py
google_auth_oauthlib
google_auth_oauthlib-0.4.2.dist-info
google_cloud_bigquery-1.21.0.dist-info
google_cloud_bigquery-1.21.0-py3.6-nspkg.pth
google_cloud_bigquery_storage-1.1.0.dist-info
google_cloud_bigquery_storage-1.1.0-py3.8-nspkg.pth
google_cloud_core-1.0.3.dist-info
google_cloud_core-1.0.3-py3.6-nspkg.pth
google_cloud_datastore-1.8.0.dist-info
google_cloud_datastore-1.8.0-py3.6-nspkg.pth
google_cloud_firestore-1.7.0.dist-info
google_cloud_firestore-1.7.0-py3.8-nspkg.pth
google_cloud_language-1.2.0.dist-info
google_cloud_language-1.2.0-py3.6-nspkg.pth
google_cloud_storage-1.18.1.dist-info
google_cloud_storage-1.18.1-py3.7-nspkg.pth
google_cloud_translate-1.5.0.dist-info
google_cloud_translate-1.5.0-py3.6-nspkg.pth
google_colab-1.0.0.dist-info
google_colab-1.0.0-py3.6-nspkg.pth
google_drive_downloader
googledrivedownloader-0.4.dist-info
google_pasta-0.2.0.dist-info
google_resumable_media-0.4.1.dist-info
google_resumable_media-0.4.1-py3.6-nspkg.pth
googlesearch
graphviz
graphviz-0.10.1.dist-info
gridfs
grpc
grpcio-1.32.0.dist-info
gspread
gspread-3.0.1.dist-info
gspread_dataframe-3.0.8.dist-info
gspread_dataframe.py
gym
gym-0.17.3.dist-info
h5py
h5py-2.10.0.dist-info
HeapDict-1.0.1.dist-info
heapdict.py
helper
hijri_converter
hijri_converter-2.1.1.dist-info
holidays
holidays-0.10.5.2.dist-info
holoviews
holoviews-1.13.5.dist-info
html5lib
html5lib-1.0.1.dist-info
httpimport-0.5.18.dist-info
httpimport.py
httplib2
httplib2-0.17.4.dist-info
httplib2shim
httplib2shim-0.0.3.dist-info
humanize
humanize-0.5.1.dist-info
hyperopt
hyperopt-0.1.2.dist-info
ideep4py
ideep4py-2.0.0.post3.dist-info
idna
idna-2.10.dist-info
image
image-1.5.33.dist-info
imageio
imageio-2.4.1.dist-info
imagesize-1.2.0.dist-info
imagesize.py
imbalanced_learn-0.4.3.dist-info
imblearn
imblearn-0.0.dist-info
imgaug
imgaug-0.2.9.dist-info
importlib_metadata
importlib_metadata-3.4.0.dist-info
importlib_resources
importlib_resources-5.1.0.dist-info
imutils
imutils-0.5.4.dist-info
inflect-2.1.0.dist-info
inflect.py
iniconfig
iniconfig-1.1.1.dist-info
intel_openmp-2021.1.2.dist-info
intervaltree
intervaltree-2.1.0.dist-info
ipykernel
ipykernel-4.10.1.dist-info
ipykernel_launcher.py
IPython
ipython-5.5.0.dist-info
ipython_genutils
ipython_genutils-0.2.0.dist-info
ipython_sql-0.3.9.dist-info
ipywidgets
ipywidgets-7.6.3.dist-info
itsdangerous
itsdangerous-1.1.0.dist-info
jax
jax-0.2.9.dist-info
jaxlib
jaxlib-0.1.60+cuda101.dist-info
jdcal-1.4.1.dist-info
jdcal.py
jedi
jedi-0.18.0.dist-info
jieba
jieba-0.42.1.dist-info
jinja2
Jinja2-2.11.3.dist-info
joblib
joblib-1.0.0.dist-info
jpeg4py
jpeg4py-0.1.4.dist-info
jsonschema
jsonschema-2.6.0.dist-info
jupyter-1.0.0.dist-info
jupyter_client
jupyter_client-5.3.5.dist-info
jupyter_console
jupyter_console-5.2.0.dist-info
jupyter_core
jupyter_core-4.7.1.dist-info
jupyterlab_pygments
jupyterlab_pygments-0.1.2.dist-info
jupyterlab_widgets
jupyterlab_widgets-1.0.0.dist-info
jupyter.py
kaggle
kaggle-1.5.10.dist-info
kapre
kapre-0.1.3.1.dist-info
keras
Keras-2.4.3.dist-info
keras_preprocessing
Keras_Preprocessing-1.1.2.dist-info
keras_vis-0.4.1.dist-info
kiwisolver-1.3.1.dist-info
kiwisolver.cpython-36m-x86_64-linux-gnu.so
knnimpute
knnimpute-0.1.0.dist-info
korean_lunar_calendar
korean_lunar_calendar-0.2.1.dist-info
libfuturize
libpasteurize
librosa
librosa-0.8.0.dist-info
lightgbm
lightgbm-2.2.3.dist-info
llvmlite
llvmlite-0.34.0.dist-info
lmdb
lmdb-0.99.dist-info
lucid
lucid-0.3.8.dist-info
lunarcalendar
LunarCalendar-0.0.9.dist-info
lxml
lxml-4.2.6.dist-info
markdown
Markdown-3.3.3.dist-info
markupsafe
MarkupSafe-1.1.1.dist-info
matplotlib
matplotlib-3.2.2.dist-info
matplotlib-3.2.2-py3.6-nspkg.pth
matplotlib.libs
matplotlib_venn
matplotlib_venn-0.11.6.dist-info
missingno
missingno-0.4.2.dist-info
mistune-0.8.4.dist-info
mistune.py
mizani
mizani-0.6.0.dist-info
mkl-2019.0.dist-info
mlxtend
mlxtend-0.14.0.dist-info
more_itertools
more_itertools-8.7.0.dist-info
moviepy
moviepy-0.2.3.5.dist-info
mpl_toolkits
mpmath
mpmath-1.1.0.dist-info
msgpack
msgpack-1.0.2.dist-info
multiprocess
_multiprocess
multiprocess-0.70.11.1.dist-info
multitasking
multitasking-0.0.9.dist-info
murmurhash
murmurhash-1.0.5.dist-info
music21
music21-5.5.0.dist-info
natsort
natsort-5.5.0.dist-info
nbclient
nbclient-0.5.2.dist-info
nbconvert
nbconvert-5.6.1.dist-info
nbformat
nbformat-5.1.2.dist-info
nest_asyncio-1.5.1.dist-info
nest_asyncio.py
networkx
networkx-2.5.dist-info
nibabel
nibabel-3.0.2.dist-info
nisext
nltk
nltk-3.2.5.dist-info
notebook
notebook-5.3.1.dist-info
np_utils
np_utils-0.5.12.1.dist-info
numba
numba-0.51.2.dist-info
numbergen
numexpr
numexpr-2.7.2.dist-info
numpy
numpy-1.19.5.dist-info
numpy.libs
nvidia_ml_py3-7.352.0.dist-info
nvidia_smi.py
oauth2client
oauth2client-4.1.3.dist-info
oauthlib
oauthlib-3.1.0.dist-info
okgrade
okgrade-0.4.3.dist-info
onnx_chainer
opencv_contrib_python-4.1.2.30.dist-info
opencv_python-4.1.2.30.dist-info
OpenGL
openpyxl
openpyxl-2.5.9.dist-info
opt_einsum
opt_einsum-3.3.0.dist-info
osqp
osqp-0.6.2.post0.dist-info
osqppurepy
packaging
packaging-20.9.dist-info
palettable
palettable-3.3.0.dist-info
pandas
pandas-1.1.5.dist-info
pandas_datareader
pandas_datareader-0.9.0.dist-info
pandas_gbq
pandas_gbq-0.13.3.dist-info
pandas_profiling
pandas_profiling-1.4.1.dist-info
pandocfilters-1.4.3.dist-info
pandocfilters.py
panel
panel-0.9.7.dist-info
param
param-1.10.1.dist-info
parso
parso-0.8.1.dist-info
past
pasta
pathlib-1.0.1.dist-info
pathlib.py
patsy
patsy-0.5.1.dist-info
pexpect
pexpect-4.8.0.dist-info
pickleshare-0.7.5.dist-info
pickleshare.py
PIL
Pillow-7.0.0.dist-info
pip
pip-19.3.1.dist-info
piptools
pip_tools-4.5.1.dist-info
pkg_resources
plac-1.1.3.dist-info
plac_core.py
plac_ext.py
plac.py
plac_tk.py
plotly
plotly-4.4.1.dist-info
_plotly_future_
_plotly_utils
plotlywidget
plotnine
plotnine-0.6.0.dist-info
pluggy
pluggy-0.7.1.dist-info
pooch
pooch-1.3.0.dist-info
portpicker-1.3.1.dist-info
portpicker.py
prefetch_generator
prefetch_generator-1.0.1.dist-info
preshed
preshed-3.0.5.dist-info
prettytable
prettytable-2.0.0.dist-info
progressbar
progressbar2-3.38.0.dist-info
prometheus_client
prometheus_client-0.9.0.dist-info
promise
promise-2.3.dist-info
prompt_toolkit
prompt_toolkit-1.0.18.dist-info
protobuf-3.12.4.dist-info
protobuf-3.12.4-py3.6-nspkg.pth
psutil
psutil-5.4.8.dist-info
psycopg2
psycopg2-2.7.6.1.dist-info
ptyprocess
ptyprocess-0.7.0.dist-info
pvectorc.cpython-36m-x86_64-linux-gnu.so
py
py-1.10.0.dist-info
pyarrow
pyarrow-0.14.1.dist-info
pyasn1
pyasn1-0.4.8.dist-info
pyasn1_modules
pyasn1_modules-0.2.8.dist-info
__pycache__
pycocotools
pycocotools-2.0.2.dist-info
pycparser
pycparser-2.20.dist-info
pyct
pyct-0.4.8.dist-info
pydata_google_auth
pydata_google_auth-1.1.0.dist-info
pydot-1.3.0.dist-info
pydot_ng
pydot_ng-2.0.0.dist-info
pydotplus
pydotplus-2.0.2.dist-info
pydot.py
pydrive
PyDrive-1.3.1.dist-info
pyemd
pyemd-0.5.1.dist-info
pyglet
pyglet-1.5.0.dist-info
pygments
Pygments-2.6.1.dist-info
pylab.py
pymc3
pymc3-3.7.dist-info
pymeeus
PyMeeus-0.3.7.dist-info
pymongo
pymongo-3.11.3.dist-info
pymystem3
pymystem3-0.2.0.dist-info
pynndescent
pynndescent-0.5.1.dist-info
pynvml.py
PyOpenGL-3.1.5.dist-info
pyparsing-2.4.7.dist-info
pyparsing.py
pyrsistent
pyrsistent-0.17.3.dist-info
_pyrsistent_version.py
pysndfile
pysndfile-1.3.8.dist-info
PySocks-1.7.1.dist-info
pystan
pystan-2.19.1.1.dist-info
_pytest
pytest-3.6.4.dist-info
pytest.py
python_chess-0.23.11.dist-info
python_dateutil-2.8.1.dist-info
python_louvain-0.15.dist-info
python_slugify-4.0.1.dist-info
python_utils
python_utils-2.5.6.dist-info
pytz
pytz-2018.9.dist-info
pyviz_comms
pyviz_comms-2.0.1.dist-info
PyWavelets-1.1.1.dist-info
pywt
pyximport
PyYAML-3.13.dist-info
pyzmq-22.0.2.dist-info
pyzmq.libs
qdldl-0.1.5.post0.dist-info
qdldl.cpython-36m-x86_64-linux-gnu.so
qtconsole
qtconsole-5.0.2.dist-info
qtpy
QtPy-1.9.0.dist-info
regex
regex-2019.12.20.dist-info
requests
requests-2.23.0.dist-info
requests_oauthlib
requests_oauthlib-1.3.0.dist-info
resampy
resampy-0.2.2.dist-info
retrying-1.3.3.dist-info
retrying.py
_rinterface_cffi_abi.py
_rinterface_cffi_api.abi3.so
rpy2
rpy2-3.2.7.dist-info
rsa
rsa-4.7.dist-info
samples
scikit_image-0.16.2.dist-info
scikit_learn-0.22.2.post1.dist-info
scipy
scipy-1.4.1.dist-info
scs
scs-2.1.2.dist-info
_scs_direct.cpython-36m-x86_64-linux-gnu.so
_scs_indirect.cpython-36m-x86_64-linux-gnu.so
_scs_python.cpython-36m-x86_64-linux-gnu.so
seaborn
seaborn-0.11.1.dist-info
send2trash
Send2Trash-1.5.0.dist-info
setuptools
setuptools-53.0.0.dist-info
setuptools_git
setuptools_git-1.2.dist-info
shapely
Shapely-1.7.1.dist-info
simplegeneric-0.8.1.dist-info
simplegeneric.py
six-1.15.0.dist-info
six.py
skimage
sklearn
sklearn-0.0.dist-info
sklearn_pandas
sklearn_pandas-1.8.0.dist-info
slugify
smart_open
smart_open-4.1.2.dist-info
snowballstemmer
snowballstemmer-2.1.0.dist-info
sockshandler.py
socks.py
sortedcontainers
sortedcontainers-2.3.0.dist-info
SoundFile-0.10.3.post1.dist-info
_soundfile.py
soundfile.py
spacy
spacy-2.2.4.dist-info
sphinx
Sphinx-1.8.5.dist-info
sphinxcontrib
sphinxcontrib_serializinghtml-1.1.4.dist-info
sphinxcontrib_serializinghtml-1.1.4-py3.8-nspkg.pth
sphinxcontrib_websupport-1.2.4.dist-info
sphinxcontrib_websupport-1.2.4-py3.8-nspkg.pth
sql
sqlalchemy
SQLAlchemy-1.3.23.dist-info
sqlparse
sqlparse-0.4.1.dist-info
srsly
srsly-1.0.5.dist-info
statsmodels
statsmodels-0.10.2.dist-info
sympy
sympy-1.1.1.dist-info
tables
tables-3.4.4.dist-info
tabulate-0.8.7.dist-info
tabulate.py
tblib
tblib-1.7.0.dist-info
tensorboard
tensorboard-2.4.1.dist-info
tensorboard_plugin_wit
tensorboard_plugin_wit-1.8.0.dist-info
tensorflow
tensorflow-2.4.1.dist-info
tensorflow_datasets
tensorflow_datasets-4.0.1.dist-info
tensorflow_estimator
tensorflow_estimator-2.4.0.dist-info
tensorflow_gcs_config
tensorflow_gcs_config-2.4.0.dist-info
tensorflow_hub
tensorflow_hub-0.11.0.dist-info
tensorflow_metadata
tensorflow_metadata-0.27.0.dist-info
tensorflow_probability
tensorflow_probability-0.12.1.dist-info
termcolor-1.1.0.dist-info
termcolor.py
terminado
terminado-0.9.2.dist-info
test
testpath
testpath-0.4.4.dist-info
tests
textblob
textblob-0.15.3.dist-info
textgenrnn
textgenrnn-1.4.1.dist-info
text_unidecode
text_unidecode-1.3.dist-info
theano
Theano-1.0.5.dist-info
thinc
thinc-7.4.0.dist-info
tifffile
tifffile-2020.9.3.dist-info
tlz
toml
toml-0.10.2.dist-info
toolz
toolz-0.11.1.dist-info
torch
torch-1.7.0+cu101.dist-info
torchsummary
torchsummary-1.5.1.dist-info
torchtext
torchtext-0.3.1.dist-info
torchvision
torchvision-0.8.1+cu101.dist-info
torchvision.libs
tornado
tornado-5.1.1.dist-info
tqdm
tqdm-4.41.1.dist-info
traitlets
traitlets-4.3.3.dist-info
tree
tweepy
tweepy-3.6.0.dist-info
typeguard
typeguard-2.7.1.dist-info
typing_extensions-3.7.4.3.dist-info
typing_extensions.py
tzlocal
tzlocal-1.5.1.dist-info
umap
umap_learn-0.5.0.dist-info
uritemplate
uritemplate-3.0.1.dist-info
urllib3
urllib3-1.24.3.dist-info
vega_datasets
vega_datasets-0.9.0.dist-info
vis
wasabi
wasabi-0.8.2.dist-info
wcwidth
wcwidth-0.2.5.dist-info
webencodings
webencodings-0.5.1.dist-info
werkzeug
Werkzeug-1.0.1.dist-info
wheel
wheel-0.36.2.dist-info
widgetsnbextension
widgetsnbextension-3.5.1.dist-info
wordcloud
wordcloud-1.5.0.dist-info
wrapt
wrapt-1.12.1.dist-info
xarray
xarray-0.15.1.dist-info
xgboost
xgboost-0.90.dist-info
xlrd
xlrd-1.1.0.dist-info
xlwt
xlwt-1.3.0.dist-info
yaml
yellowbrick
yellowbrick-0.9.1.dist-info
zict
zict-2.0.0.dist-info
zipp-3.4.0.dist-info
zipp.py
zmq
###Markdown
**Installing Caffe using Conda. Also, if anything else needs to be installed using Conda, use the *--yes* flag**
###Code
!conda install -c anaconda caffe --yes
###Output
Collecting package metadata (current_repodata.json): - \ | / - \ | / - \ | / - \ done
Solving environment: / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ failed with initial frozen solve. Retrying with flexible solve.
Solving environment: / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / done
Solving environment: \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ | / - \ done
## Package Plan ##
environment location: /usr/local
added / updated specs:
- caffe
The following packages will be downloaded:
package | build
---------------------------|-----------------
backcall-0.2.0 | py_0 14 KB anaconda
blas-1.0 | mkl 6 KB anaconda
boost-1.67.0 | py36_4 11 KB anaconda
bzip2-1.0.8 | h7b6447c_0 105 KB anaconda
ca-certificates-2020.10.14 | 0 128 KB anaconda
caffe-1.0 | py36hbab4207_5 5.6 MB anaconda
cairo-1.14.12 | h8948797_3 1.3 MB anaconda
certifi-2020.6.20 | py36_0 160 KB anaconda
cloudpickle-1.6.0 | py_0 29 KB anaconda
cycler-0.10.0 | py36_0 13 KB anaconda
cytoolz-0.11.0 | py36h7b6447c_0 376 KB anaconda
dask-core-2.30.0 | py_0 639 KB anaconda
dbus-1.13.18 | hb2f20db_0 586 KB anaconda
decorator-4.4.2 | py_0 14 KB anaconda
expat-2.2.10 | he6710b0_2 192 KB anaconda
ffmpeg-4.0 | hcdf2ecd_0 73.7 MB anaconda
fontconfig-2.13.0 | h9420a91_0 291 KB anaconda
freeglut-3.0.0 | hf484d3e_5 251 KB anaconda
freetype-2.10.4 | h5ab3b9f_0 901 KB anaconda
gflags-2.2.2 | he6710b0_0 160 KB anaconda
glib-2.56.2 | hd408876_0 5.0 MB anaconda
glog-0.3.5 | hf484d3e_1 138 KB
graphite2-1.3.14 | h23475e2_0 102 KB anaconda
gst-plugins-base-1.14.0 | hbbd80ab_1 6.3 MB anaconda
gstreamer-1.14.0 | hb453b48_1 3.8 MB anaconda
h5py-2.8.0 | py36h989c5e5_3 1.1 MB anaconda
harfbuzz-1.8.8 | hffaf4a1_0 863 KB anaconda
hdf5-1.10.2 | hba1933b_1 5.2 MB anaconda
icu-58.2 | he6710b0_3 22.7 MB anaconda
imageio-2.9.0 | py_0 3.1 MB anaconda
intel-openmp-2020.2 | 254 947 KB anaconda
ipython-7.16.1 | py36h5ca1d4c_0 1.1 MB anaconda
ipython_genutils-0.2.0 | py36_0 39 KB anaconda
jasper-2.0.14 | h07fcdf6_1 1.1 MB anaconda
jedi-0.17.2 | py36_0 952 KB anaconda
jpeg-9b | habf39ab_1 247 KB anaconda
kiwisolver-1.2.0 | py36hfd86e86_0 91 KB anaconda
lcms2-2.11 | h396b838_0 419 KB anaconda
leveldb-1.20 | hf484d3e_1 253 KB
libboost-1.67.0 | h46d08c1_4 20.9 MB anaconda
libgfortran-ng-7.3.0 | hdf63c60_0 1.3 MB anaconda
libglu-9.0.0 | hf484d3e_1 377 KB anaconda
libopencv-3.4.2 | hb342d67_1 40.4 MB anaconda
libopus-1.3.1 | h7b6447c_0 570 KB anaconda
libpng-1.6.37 | hbc83047_0 364 KB anaconda
libprotobuf-3.13.0.1 | hd408876_0 2.3 MB anaconda
libtiff-4.1.0 | h2733197_1 607 KB anaconda
libuuid-1.0.3 | h1bed415_2 16 KB anaconda
libvpx-1.7.0 | h439df22_0 2.4 MB anaconda
libxcb-1.14 | h7b6447c_0 610 KB anaconda
libxml2-2.9.10 | hb55368b_3 1.3 MB anaconda
lmdb-0.9.24 | he6710b0_0 680 KB anaconda
lz4-c-1.9.2 | heb0550a_3 203 KB anaconda
matplotlib-3.3.1 | 0 24 KB anaconda
matplotlib-base-3.3.1 | py36h817c723_0 6.7 MB anaconda
mkl-2019.4 | 243 204.1 MB anaconda
mkl-service-2.3.0 | py36he904b0f_0 208 KB anaconda
mkl_fft-1.2.0 | py36h23d657b_0 164 KB anaconda
mkl_random-1.1.0 | py36hd6b4f25_0 369 KB anaconda
networkx-2.5 | py_0 1.2 MB anaconda
numpy-1.19.1 | py36hbc911f0_0 20 KB anaconda
numpy-base-1.19.1 | py36hfa32c7d_0 5.2 MB anaconda
olefile-0.46 | py36_0 48 KB anaconda
pandas-1.1.3 | py36he6710b0_0 10.5 MB anaconda
parso-0.7.0 | py_0 71 KB anaconda
pcre-8.44 | he6710b0_0 269 KB anaconda
pexpect-4.8.0 | py36_0 84 KB anaconda
pickleshare-0.7.5 | py36_0 13 KB anaconda
pillow-8.0.0 | py36h9a89aac_0 675 KB anaconda
pixman-0.40.0 | h7b6447c_0 628 KB anaconda
prompt-toolkit-3.0.8 | py_0 244 KB anaconda
protobuf-3.13.0.1 | py36he6710b0_1 698 KB anaconda
ptyprocess-0.6.0 | py36_0 23 KB anaconda
py-boost-1.67.0 | py36h04863e7_4 302 KB anaconda
py-opencv-3.4.2 | py36hb342d67_1 1.2 MB anaconda
pygments-2.7.1 | py_0 704 KB anaconda
pyparsing-2.4.7 | py_0 64 KB anaconda
pyqt-5.9.2 | py36h22d08a2_1 5.6 MB anaconda
python-dateutil-2.8.1 | py_0 224 KB anaconda
python-gflags-3.1.2 | py36_0 70 KB anaconda
python-leveldb-0.201 | py36he6710b0_0 27 KB anaconda
pytz-2020.1 | py_0 239 KB anaconda
pywavelets-1.1.1 | py36h7b6447c_2 4.4 MB anaconda
pyyaml-5.3.1 | py36h7b6447c_1 191 KB anaconda
qt-5.9.7 | h5867ecd_1 85.9 MB anaconda
scikit-image-0.17.2 | py36hdf5156a_0 10.8 MB anaconda
scipy-1.5.2 | py36h0b6359f_0 18.5 MB anaconda
sip-4.19.24 | py36he6710b0_0 297 KB anaconda
snappy-1.1.8 | he6710b0_0 43 KB anaconda
tifffile-2020.10.1 | py36hdd07704_2 272 KB anaconda
toolz-0.11.1 | py_0 47 KB anaconda
tornado-6.0.4 | py36h7b6447c_1 650 KB anaconda
traitlets-4.3.3 | py36_0 137 KB anaconda
wcwidth-0.2.5 | py_0 37 KB anaconda
zstd-1.4.4 | h0b5b093_3 1006 KB anaconda
------------------------------------------------------------
Total: 571.4 MB
The following NEW packages will be INSTALLED:
backcall anaconda/noarch::backcall-0.2.0-py_0
blas anaconda/linux-64::blas-1.0-mkl
boost anaconda/linux-64::boost-1.67.0-py36_4
bzip2 anaconda/linux-64::bzip2-1.0.8-h7b6447c_0
caffe anaconda/linux-64::caffe-1.0-py36hbab4207_5
cairo anaconda/linux-64::cairo-1.14.12-h8948797_3
cloudpickle anaconda/noarch::cloudpickle-1.6.0-py_0
cycler anaconda/linux-64::cycler-0.10.0-py36_0
cytoolz anaconda/linux-64::cytoolz-0.11.0-py36h7b6447c_0
dask-core anaconda/noarch::dask-core-2.30.0-py_0
dbus anaconda/linux-64::dbus-1.13.18-hb2f20db_0
decorator anaconda/noarch::decorator-4.4.2-py_0
expat anaconda/linux-64::expat-2.2.10-he6710b0_2
ffmpeg anaconda/linux-64::ffmpeg-4.0-hcdf2ecd_0
fontconfig anaconda/linux-64::fontconfig-2.13.0-h9420a91_0
freeglut anaconda/linux-64::freeglut-3.0.0-hf484d3e_5
freetype anaconda/linux-64::freetype-2.10.4-h5ab3b9f_0
gflags anaconda/linux-64::gflags-2.2.2-he6710b0_0
glib anaconda/linux-64::glib-2.56.2-hd408876_0
glog pkgs/main/linux-64::glog-0.3.5-hf484d3e_1
graphite2 anaconda/linux-64::graphite2-1.3.14-h23475e2_0
gst-plugins-base anaconda/linux-64::gst-plugins-base-1.14.0-hbbd80ab_1
gstreamer anaconda/linux-64::gstreamer-1.14.0-hb453b48_1
h5py anaconda/linux-64::h5py-2.8.0-py36h989c5e5_3
harfbuzz anaconda/linux-64::harfbuzz-1.8.8-hffaf4a1_0
hdf5 anaconda/linux-64::hdf5-1.10.2-hba1933b_1
icu anaconda/linux-64::icu-58.2-he6710b0_3
imageio anaconda/noarch::imageio-2.9.0-py_0
intel-openmp anaconda/linux-64::intel-openmp-2020.2-254
ipython anaconda/linux-64::ipython-7.16.1-py36h5ca1d4c_0
ipython_genutils anaconda/linux-64::ipython_genutils-0.2.0-py36_0
jasper anaconda/linux-64::jasper-2.0.14-h07fcdf6_1
jedi anaconda/linux-64::jedi-0.17.2-py36_0
jpeg anaconda/linux-64::jpeg-9b-habf39ab_1
kiwisolver anaconda/linux-64::kiwisolver-1.2.0-py36hfd86e86_0
lcms2 anaconda/linux-64::lcms2-2.11-h396b838_0
leveldb pkgs/main/linux-64::leveldb-1.20-hf484d3e_1
libboost anaconda/linux-64::libboost-1.67.0-h46d08c1_4
libgfortran-ng anaconda/linux-64::libgfortran-ng-7.3.0-hdf63c60_0
libglu anaconda/linux-64::libglu-9.0.0-hf484d3e_1
libopencv anaconda/linux-64::libopencv-3.4.2-hb342d67_1
libopus anaconda/linux-64::libopus-1.3.1-h7b6447c_0
libpng anaconda/linux-64::libpng-1.6.37-hbc83047_0
libprotobuf anaconda/linux-64::libprotobuf-3.13.0.1-hd408876_0
libtiff anaconda/linux-64::libtiff-4.1.0-h2733197_1
libuuid anaconda/linux-64::libuuid-1.0.3-h1bed415_2
libvpx anaconda/linux-64::libvpx-1.7.0-h439df22_0
libxcb anaconda/linux-64::libxcb-1.14-h7b6447c_0
libxml2 anaconda/linux-64::libxml2-2.9.10-hb55368b_3
lmdb anaconda/linux-64::lmdb-0.9.24-he6710b0_0
lz4-c anaconda/linux-64::lz4-c-1.9.2-heb0550a_3
matplotlib anaconda/linux-64::matplotlib-3.3.1-0
matplotlib-base anaconda/linux-64::matplotlib-base-3.3.1-py36h817c723_0
mkl anaconda/linux-64::mkl-2019.4-243
mkl-service anaconda/linux-64::mkl-service-2.3.0-py36he904b0f_0
mkl_fft anaconda/linux-64::mkl_fft-1.2.0-py36h23d657b_0
mkl_random anaconda/linux-64::mkl_random-1.1.0-py36hd6b4f25_0
networkx anaconda/noarch::networkx-2.5-py_0
numpy anaconda/linux-64::numpy-1.19.1-py36hbc911f0_0
numpy-base anaconda/linux-64::numpy-base-1.19.1-py36hfa32c7d_0
olefile anaconda/linux-64::olefile-0.46-py36_0
pandas anaconda/linux-64::pandas-1.1.3-py36he6710b0_0
parso anaconda/noarch::parso-0.7.0-py_0
pcre anaconda/linux-64::pcre-8.44-he6710b0_0
pexpect anaconda/linux-64::pexpect-4.8.0-py36_0
pickleshare anaconda/linux-64::pickleshare-0.7.5-py36_0
pillow anaconda/linux-64::pillow-8.0.0-py36h9a89aac_0
pixman anaconda/linux-64::pixman-0.40.0-h7b6447c_0
prompt-toolkit anaconda/noarch::prompt-toolkit-3.0.8-py_0
protobuf anaconda/linux-64::protobuf-3.13.0.1-py36he6710b0_1
ptyprocess anaconda/linux-64::ptyprocess-0.6.0-py36_0
py-boost anaconda/linux-64::py-boost-1.67.0-py36h04863e7_4
py-opencv anaconda/linux-64::py-opencv-3.4.2-py36hb342d67_1
pygments anaconda/noarch::pygments-2.7.1-py_0
pyparsing anaconda/noarch::pyparsing-2.4.7-py_0
pyqt anaconda/linux-64::pyqt-5.9.2-py36h22d08a2_1
python-dateutil anaconda/noarch::python-dateutil-2.8.1-py_0
python-gflags anaconda/linux-64::python-gflags-3.1.2-py36_0
python-leveldb anaconda/linux-64::python-leveldb-0.201-py36he6710b0_0
pytz anaconda/noarch::pytz-2020.1-py_0
pywavelets anaconda/linux-64::pywavelets-1.1.1-py36h7b6447c_2
pyyaml anaconda/linux-64::pyyaml-5.3.1-py36h7b6447c_1
qt anaconda/linux-64::qt-5.9.7-h5867ecd_1
scikit-image anaconda/linux-64::scikit-image-0.17.2-py36hdf5156a_0
scipy anaconda/linux-64::scipy-1.5.2-py36h0b6359f_0
sip anaconda/linux-64::sip-4.19.24-py36he6710b0_0
snappy anaconda/linux-64::snappy-1.1.8-he6710b0_0
tifffile anaconda/linux-64::tifffile-2020.10.1-py36hdd07704_2
toolz anaconda/noarch::toolz-0.11.1-py_0
tornado anaconda/linux-64::tornado-6.0.4-py36h7b6447c_1
traitlets anaconda/linux-64::traitlets-4.3.3-py36_0
wcwidth anaconda/noarch::wcwidth-0.2.5-py_0
zstd anaconda/linux-64::zstd-1.4.4-h0b5b093_3
The following packages will be SUPERSEDED by a higher-priority channel:
ca-certificates pkgs/main::ca-certificates-2021.1.19-~ --> anaconda::ca-certificates-2020.10.14-0
certifi pkgs/main::certifi-2020.12.5-py36h06a~ --> anaconda::certifi-2020.6.20-py36_0
Downloading and Extracting Packages
jasper-2.0.14 | 1.1 MB | : 100% 1.0/1 [00:00<00:00, 4.15it/s]
matplotlib-3.3.1 | 24 KB | : 100% 1.0/1 [00:00<00:00, 33.22it/s]
mkl_fft-1.2.0 | 164 KB | : 100% 1.0/1 [00:00<00:00, 18.08it/s]
icu-58.2 | 22.7 MB | : 100% 1.0/1 [00:03<00:00, 3.19s/it]
prompt-toolkit-3.0.8 | 244 KB | : 100% 1.0/1 [00:00<00:00, 11.88it/s]
python-leveldb-0.201 | 27 KB | : 100% 1.0/1 [00:00<00:00, 31.73it/s]
sip-4.19.24 | 297 KB | : 100% 1.0/1 [00:00<00:00, 13.95it/s]
libopencv-3.4.2 | 40.4 MB | : 100% 1.0/1 [00:06<00:00, 6.70s/it]
dbus-1.13.18 | 586 KB | : 100% 1.0/1 [00:00<00:00, 8.47it/s]
blas-1.0 | 6 KB | : 100% 1.0/1 [00:00<00:00, 34.24it/s]
tifffile-2020.10.1 | 272 KB | : 100% 1.0/1 [00:00<00:00, 14.02it/s]
gstreamer-1.14.0 | 3.8 MB | : 100% 1.0/1 [00:00<00:00, 1.71it/s]
tornado-6.0.4 | 650 KB | : 100% 1.0/1 [00:00<00:00, 5.96it/s]
ipython-7.16.1 | 1.1 MB | : 100% 1.0/1 [00:00<00:00, 2.74it/s]
kiwisolver-1.2.0 | 91 KB | : 100% 1.0/1 [00:00<00:00, 14.16it/s]
hdf5-1.10.2 | 5.2 MB | : 100% 1.0/1 [00:00<00:00, 1.12it/s]
python-dateutil-2.8. | 224 KB | : 100% 1.0/1 [00:00<00:00, 20.53it/s]
gst-plugins-base-1.1 | 6.3 MB | : 100% 1.0/1 [00:00<00:00, 1.14it/s]
libopus-1.3.1 | 570 KB | : 100% 1.0/1 [00:00<00:00, 8.61it/s]
numpy-base-1.19.1 | 5.2 MB | : 100% 1.0/1 [00:01<00:00, 1.03s/it]
intel-openmp-2020.2 | 947 KB | : 100% 1.0/1 [00:00<00:00, 6.15it/s]
numpy-1.19.1 | 20 KB | : 100% 1.0/1 [00:00<00:00, 42.46it/s]
mkl-2019.4 | 204.1 MB | : 100% 1.0/1 [00:32<00:00, 32.78s/it]
networkx-2.5 | 1.2 MB | : 100% 1.0/1 [00:00<00:00, 3.49it/s]
scikit-image-0.17.2 | 10.8 MB | : 100% 1.0/1 [00:01<00:00, 1.50s/it]
toolz-0.11.1 | 47 KB | : 100% 1.0/1 [00:00<00:00, 19.33it/s]
protobuf-3.13.0.1 | 698 KB | : 100% 1.0/1 [00:00<00:00, 4.66it/s]
parso-0.7.0 | 71 KB | : 100% 1.0/1 [00:00<00:00, 18.65it/s]
python-gflags-3.1.2 | 70 KB | : 100% 1.0/1 [00:00<00:00, 23.50it/s]
ptyprocess-0.6.0 | 23 KB | : 100% 1.0/1 [00:00<00:00, 31.26it/s]
libboost-1.67.0 | 20.9 MB | : 100% 1.0/1 [00:05<00:00, 5.85s/it]
pcre-8.44 | 269 KB | : 100% 1.0/1 [00:00<00:00, 14.11it/s]
libglu-9.0.0 | 377 KB | : 100% 1.0/1 [00:00<00:00, 12.10it/s]
pyparsing-2.4.7 | 64 KB | : 100% 1.0/1 [00:00<00:00, 13.12it/s]
libgfortran-ng-7.3.0 | 1.3 MB | : 100% 1.0/1 [00:00<00:00, 4.40it/s]
lcms2-2.11 | 419 KB | : 100% 1.0/1 [00:00<00:00, 11.13it/s]
py-opencv-3.4.2 | 1.2 MB | : 100% 1.0/1 [00:00<00:00, 4.33it/s]
olefile-0.46 | 48 KB | : 100% 1.0/1 [00:00<00:00, 22.36it/s]
pywavelets-1.1.1 | 4.4 MB | : 100% 1.0/1 [00:00<00:00, 1.78it/s]
cytoolz-0.11.0 | 376 KB | : 100% 1.0/1 [00:00<00:00, 7.99it/s]
certifi-2020.6.20 | 160 KB | : 100% 1.0/1 [00:00<00:00, 19.85it/s]
libpng-1.6.37 | 364 KB | : 100% 1.0/1 [00:00<00:00, 12.44it/s]
matplotlib-base-3.3. | 6.7 MB | : 100% 1.0/1 [00:01<00:00, 1.16s/it]
wcwidth-0.2.5 | 37 KB | : 100% 1.0/1 [00:00<00:00, 11.65it/s]
ffmpeg-4.0 | 73.7 MB | : 100% 1.0/1 [00:10<00:00, 10.21s/it]
h5py-2.8.0 | 1.1 MB | : 100% 1.0/1 [00:00<00:00, 4.81it/s]
freetype-2.10.4 | 901 KB | : 100% 1.0/1 [00:00<00:00, 4.95it/s]
ca-certificates-2020 | 128 KB | : 100% 1.0/1 [00:00<00:00, 17.05it/s]
decorator-4.4.2 | 14 KB | : 100% 1.0/1 [00:00<00:00, 37.33it/s]
zstd-1.4.4 | 1006 KB | : 100% 1.0/1 [00:00<00:00, 5.32it/s]
jpeg-9b | 247 KB | : 100% 1.0/1 [00:00<00:00, 14.12it/s]
backcall-0.2.0 | 14 KB | : 100% 1.0/1 [00:00<00:00, 27.31it/s]
libxml2-2.9.10 | 1.3 MB | : 100% 1.0/1 [00:00<00:00, 3.04it/s]
ipython_genutils-0.2 | 39 KB | : 100% 1.0/1 [00:00<00:00, 16.29it/s]
lmdb-0.9.24 | 680 KB | : 100% 1.0/1 [00:00<00:00, 7.56it/s]
fontconfig-2.13.0 | 291 KB | : 100% 1.0/1 [00:00<00:00, 12.28it/s]
cairo-1.14.12 | 1.3 MB | : 100% 1.0/1 [00:00<00:00, 3.74it/s]
leveldb-1.20 | 253 KB | : 100% 1.0/1 [00:00<00:00, 10.69it/s]
expat-2.2.10 | 192 KB | : 100% 1.0/1 [00:00<00:00, 14.64it/s]
traitlets-4.3.3 | 137 KB | : 100% 1.0/1 [00:00<00:00, 15.03it/s]
pixman-0.40.0 | 628 KB | : 100% 1.0/1 [00:00<00:00, 7.74it/s]
snappy-1.1.8 | 43 KB | : 100% 1.0/1 [00:00<00:00, 28.09it/s]
harfbuzz-1.8.8 | 863 KB | : 100% 1.0/1 [00:00<00:00, 6.69it/s]
pyqt-5.9.2 | 5.6 MB | : 100% 1.0/1 [00:01<00:00, 1.08s/it]
libuuid-1.0.3 | 16 KB | : 100% 1.0/1 [00:00<00:00, 35.79it/s]
graphite2-1.3.14 | 102 KB | : 100% 1.0/1 [00:00<00:00, 24.59it/s]
glog-0.3.5 | 138 KB | : 100% 1.0/1 [00:00<00:00, 16.80it/s]
qt-5.9.7 | 85.9 MB | : 100% 1.0/1 [00:13<00:00, 13.59s/it]
pexpect-4.8.0 | 84 KB | : 100% 1.0/1 [00:00<00:00, 20.38it/s]
libxcb-1.14 | 610 KB | : 100% 1.0/1 [00:00<00:00, 5.72it/s]
py-boost-1.67.0 | 302 KB | : 100% 1.0/1 [00:00<00:00, 10.05it/s]
lz4-c-1.9.2 | 203 KB | : 100% 1.0/1 [00:00<00:00, 14.08it/s]
libprotobuf-3.13.0.1 | 2.3 MB | : 100% 1.0/1 [00:00<00:00, 2.43it/s]
pyyaml-5.3.1 | 191 KB | : 100% 1.0/1 [00:00<00:00, 17.92it/s]
libtiff-4.1.0 | 607 KB | : 100% 1.0/1 [00:00<00:00, 8.77it/s]
cloudpickle-1.6.0 | 29 KB | : 100% 1.0/1 [00:00<00:00, 33.95it/s]
pygments-2.7.1 | 704 KB | : 100% 1.0/1 [00:00<00:00, 6.08it/s]
libvpx-1.7.0 | 2.4 MB | : 100% 1.0/1 [00:00<00:00, 2.65it/s]
boost-1.67.0 | 11 KB | : 100% 1.0/1 [00:00<00:00, 24.44it/s]
scipy-1.5.2 | 18.5 MB | : 100% 1.0/1 [00:02<00:00, 2.93s/it]
mkl_random-1.1.0 | 369 KB | : 100% 1.0/1 [00:00<00:00, 12.41it/s]
pickleshare-0.7.5 | 13 KB | : 100% 1.0/1 [00:00<00:00, 34.28it/s]
glib-2.56.2 | 5.0 MB | : 100% 1.0/1 [00:01<00:00, 1.22s/it]
pytz-2020.1 | 239 KB | : 100% 1.0/1 [00:00<00:00, 9.18it/s]
caffe-1.0 | 5.6 MB | : 100% 1.0/1 [00:00<00:00, 1.10it/s]
cycler-0.10.0 | 13 KB | : 100% 1.0/1 [00:00<00:00, 41.30it/s]
freeglut-3.0.0 | 251 KB | : 100% 1.0/1 [00:00<00:00, 14.45it/s]
gflags-2.2.2 | 160 KB | : 100% 1.0/1 [00:00<00:00, 16.75it/s]
mkl-service-2.3.0 | 208 KB | : 100% 1.0/1 [00:00<00:00, 16.38it/s]
imageio-2.9.0 | 3.1 MB | : 100% 1.0/1 [00:00<00:00, 2.57it/s]
jedi-0.17.2 | 952 KB | : 100% 1.0/1 [00:00<00:00, 2.47it/s]
dask-core-2.30.0 | 639 KB | : 100% 1.0/1 [00:00<00:00, 6.01it/s]
bzip2-1.0.8 | 105 KB | : 100% 1.0/1 [00:00<00:00, 23.80it/s]
pillow-8.0.0 | 675 KB | : 100% 1.0/1 [00:00<00:00, 6.51it/s]
pandas-1.1.3 | 10.5 MB | : 100% 1.0/1 [00:02<00:00, 2.11s/it]
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
###Markdown
**Installing h5py for Keras conversion**
###Code
!pip install h5py
###Output
Requirement already satisfied: h5py in /usr/local/lib/python3.6/site-packages (2.8.0)
Requirement already satisfied: numpy>=1.7 in /usr/local/lib/python3.6/site-packages (from h5py) (1.19.1)
Requirement already satisfied: six in /usr/local/lib/python3.6/site-packages (from h5py) (1.15.0)
###Markdown
**Cloning the repository which contains the Python script for conversion**
###Code
! git clone https://github.com/Prabhdeep1999/Caffe-Model-to-Keras-.h5-conversion.git
###Output
Cloning into 'Caffe-Model-to-Keras-.h5-conversion'...
remote: Enumerating objects: 8, done.[K
remote: Counting objects: 100% (8/8), done.[K
remote: Compressing objects: 100% (7/7), done.[K
remote: Total 8 (delta 1), reused 4 (delta 1), pack-reused 0[K
Unpacking objects: 100% (8/8), done.
###Markdown
**Finally, run the script by adding your own .caffemodel and .prototxt files; the result will be stored in the /content/ directory of the Colab runtime with the name new_model.h5**
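For reference, a filled-in call could look like the sketch below; the `.prototxt` and `.caffemodel` file names are hypothetical placeholders, and only the argument order matters (output `.h5` path first, then the `.prototxt`, then the `.caffemodel`).

```python
# Hypothetical example call (replace the file names with your own):
!python /content/Caffe-Model-to-Keras-.h5-conversion/caffe_weight_converter.py '/content/new_model.h5' \
        '/content/deploy.prototxt' \
        '/content/my_network.caffemodel' \
        --verbose
```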
###Code
!python /content/Caffe-Model-to-Keras-.h5-conversion/caffe_weight_converter.py '/content/new_model.h5' \
'add your .prototxt file path' \
'add your .caffemodel path' \
--verbose
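# Optional sanity check (an added sketch, not part of the original conversion script):
# if the conversion succeeded, the converted HDF5 file can be inspected with h5py.
# The file name below is taken from the converter's own output ('/content/new_model.h5.h5').
import h5py
with h5py.File('/content/new_model.h5.h5', 'r') as f:
    print('Converted layer groups:', list(f.keys()))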
###Output
Skipped layer 'input' of type 'Input' because it doesn't have any weights
Skipped layer 'data_input_0_split' of type 'Split' because it doesn't have any weights
Converted weights for layer 'conv1_1' of type 'Convolution'
Skipped layer 'relu1_1' of type 'ReLU' because it doesn't have any weights
Converted weights for layer 'conv1_2' of type 'Convolution'
Skipped layer 'relu1_2' of type 'ReLU' because it doesn't have any weights
Skipped layer 'conv1_2_relu1_2_0_split' of type 'Split' because it doesn't have any weights
Skipped layer 'pool1' of type 'Pooling' because it doesn't have any weights
Converted weights for layer 'conv2_1' of type 'Convolution'
Skipped layer 'relu2_1' of type 'ReLU' because it doesn't have any weights
Converted weights for layer 'conv2_2' of type 'Convolution'
Skipped layer 'relu2_2' of type 'ReLU' because it doesn't have any weights
Skipped layer 'conv2_2_relu2_2_0_split' of type 'Split' because it doesn't have any weights
Skipped layer 'pool2' of type 'Pooling' because it doesn't have any weights
Converted weights for layer 'conv3_1' of type 'Convolution'
Skipped layer 'relu3_1' of type 'ReLU' because it doesn't have any weights
Converted weights for layer 'conv3_2' of type 'Convolution'
Skipped layer 'relu3_2' of type 'ReLU' because it doesn't have any weights
Converted weights for layer 'conv3_3' of type 'Convolution'
Skipped layer 'relu3_3' of type 'ReLU' because it doesn't have any weights
Skipped layer 'conv3_3_relu3_3_0_split' of type 'Split' because it doesn't have any weights
Skipped layer 'pool3' of type 'Pooling' because it doesn't have any weights
Converted weights for layer 'conv4_1' of type 'Convolution'
Skipped layer 'relu4_1' of type 'ReLU' because it doesn't have any weights
Converted weights for layer 'conv4_2' of type 'Convolution'
Skipped layer 'relu4_2' of type 'ReLU' because it doesn't have any weights
Converted weights for layer 'conv4_3' of type 'Convolution'
Skipped layer 'relu4_3' of type 'ReLU' because it doesn't have any weights
Skipped layer 'conv4_3_relu4_3_0_split' of type 'Split' because it doesn't have any weights
Skipped layer 'pool4' of type 'Pooling' because it doesn't have any weights
Converted weights for layer 'conv5_1' of type 'Convolution'
Skipped layer 'relu5_1' of type 'ReLU' because it doesn't have any weights
Converted weights for layer 'conv5_2' of type 'Convolution'
Skipped layer 'relu5_2' of type 'ReLU' because it doesn't have any weights
Converted weights for layer 'conv5_3' of type 'Convolution'
Skipped layer 'relu5_3' of type 'ReLU' because it doesn't have any weights
Converted weights for layer 'score-dsn1' of type 'Convolution'
Skipped layer 'crop' of type 'Crop' because it doesn't have any weights
Skipped layer 'upscore-dsn1_crop_0_split' of type 'Split' because it doesn't have any weights
Skipped layer 'sigmoid-dsn1' of type 'Sigmoid' because it doesn't have any weights
Converted weights for layer 'score-dsn2' of type 'Convolution'
Converted weights for layer 'upsample_2' of type 'Deconvolution'
Skipped layer 'crop' of type 'Crop' because it doesn't have any weights
Skipped layer 'upscore-dsn2_crop_0_split' of type 'Split' because it doesn't have any weights
Skipped layer 'sigmoid-dsn2' of type 'Sigmoid' because it doesn't have any weights
Converted weights for layer 'score-dsn3' of type 'Convolution'
Converted weights for layer 'upsample_4' of type 'Deconvolution'
Skipped layer 'crop' of type 'Crop' because it doesn't have any weights
Skipped layer 'upscore-dsn3_crop_0_split' of type 'Split' because it doesn't have any weights
Skipped layer 'sigmoid-dsn3' of type 'Sigmoid' because it doesn't have any weights
Converted weights for layer 'score-dsn4' of type 'Convolution'
Converted weights for layer 'upsample_8' of type 'Deconvolution'
Skipped layer 'crop' of type 'Crop' because it doesn't have any weights
Skipped layer 'upscore-dsn4_crop_0_split' of type 'Split' because it doesn't have any weights
Skipped layer 'sigmoid-dsn4' of type 'Sigmoid' because it doesn't have any weights
Converted weights for layer 'score-dsn5' of type 'Convolution'
Converted weights for layer 'upsample_16' of type 'Deconvolution'
Skipped layer 'crop' of type 'Crop' because it doesn't have any weights
Skipped layer 'upscore-dsn5_crop_0_split' of type 'Split' because it doesn't have any weights
Skipped layer 'sigmoid-dsn5' of type 'Sigmoid' because it doesn't have any weights
Skipped layer 'concat' of type 'Concat' because it doesn't have any weights
Converted weights for layer 'new-score-weighting' of type 'Convolution'
Skipped layer 'sigmoid-fuse' of type 'Sigmoid' because it doesn't have any weights
Weight conversion complete.
23 layers were processed, out of which:
0 were of an unknown layer type
0 did not have any weights
File saved as /content/new_model.h5.h5
Colab_notebooks/Beta notebooks/Detectron2_2D_ZeroCostDL4Mic.ipynb | ###Markdown
**This notebook is in beta**Expect some instabilities and bugs.**Currently missing features include:**- Augmentation cannot be disabled- Exported results include only a simple CSV file. More options will be included in the next releases- Training and QC reports are not generated **Detectron2 (2D)** Detectron2 is a deep-learning method designed to perform object detection and classification of objects in images. Detectron2 is Facebook AI Research's next generation software system that implements state-of-the-art object detection algorithms. It is a ground-up rewrite of the previous version, Detectron, and it originates from maskrcnn-benchmark. More information on Detectron2 can be found on the Detectron2 github pages (https://github.com/facebookresearch/detectron2).**This particular notebook enables object detection and classification on 2D images given ground truth bounding boxes. If you are interested in image segmentation, you should use our U-net or Stardist notebooks instead.**---*Disclaimer*:This notebook is part of the Zero-Cost Deep-Learning to Enhance Microscopy project (https://github.com/HenriquesLab/DeepLearning_Collab/wiki). Jointly developed by the Jacquemet (link to https://cellmig.org/) and Henriques (https://henriqueslab.github.io/) laboratories. **License**---
###Code
#@markdown ##Double click to see the license information
#------------------------- LICENSE FOR ZeroCostDL4Mic------------------------------------
#This ZeroCostDL4Mic notebook is distributed under the MIT licence
#------------------------- LICENSE FOR CycleGAN ------------------------------------
#Apache License
#Version 2.0, January 2004
#http://www.apache.org/licenses/
#TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
#1. Definitions.
#"License" shall mean the terms and conditions for use, reproduction,
#and distribution as defined by Sections 1 through 9 of this document.
#"Licensor" shall mean the copyright owner or entity authorized by
#the copyright owner that is granting the License.
#"Legal Entity" shall mean the union of the acting entity and all
#other entities that control, are controlled by, or are under common
#control with that entity. For the purposes of this definition,
#"control" means (i) the power, direct or indirect, to cause the
#direction or management of such entity, whether by contract or
#otherwise, or (ii) ownership of fifty percent (50%) or more of the
#outstanding shares, or (iii) beneficial ownership of such entity.
#"You" (or "Your") shall mean an individual or Legal Entity
#exercising permissions granted by this License.
#"Source" form shall mean the preferred form for making modifications,
#including but not limited to software source code, documentation
#source, and configuration files.
#"Object" form shall mean any form resulting from mechanical
#transformation or translation of a Source form, including but
#not limited to compiled object code, generated documentation,
#and conversions to other media types.
#"Work" shall mean the work of authorship, whether in Source or
#Object form, made available under the License, as indicated by a
#copyright notice that is included in or attached to the work
#(an example is provided in the Appendix below).
#"Derivative Works" shall mean any work, whether in Source or Object
#form, that is based on (or derived from) the Work and for which the
#editorial revisions, annotations, elaborations, or other modifications
#represent, as a whole, an original work of authorship. For the purposes
#of this License, Derivative Works shall not include works that remain
#separable from, or merely link (or bind by name) to the interfaces of,
#the Work and Derivative Works thereof.
#"Contribution" shall mean any work of authorship, including
#the original version of the Work and any modifications or additions
#to that Work or Derivative Works thereof, that is intentionally
#submitted to Licensor for inclusion in the Work by the copyright owner
#or by an individual or Legal Entity authorized to submit on behalf of
#the copyright owner. For the purposes of this definition, "submitted"
#means any form of electronic, verbal, or written communication sent
#to the Licensor or its representatives, including but not limited to
#communication on electronic mailing lists, source code control systems,
#and issue tracking systems that are managed by, or on behalf of, the
#Licensor for the purpose of discussing and improving the Work, but
#excluding communication that is conspicuously marked or otherwise
#designated in writing by the copyright owner as "Not a Contribution."
#"Contributor" shall mean Licensor and any individual or Legal Entity
#on behalf of whom a Contribution has been received by Licensor and
#subsequently incorporated within the Work.
#2. Grant of Copyright License. Subject to the terms and conditions of
#this License, each Contributor hereby grants to You a perpetual,
#worldwide, non-exclusive, no-charge, royalty-free, irrevocable
#copyright license to reproduce, prepare Derivative Works of,
#publicly display, publicly perform, sublicense, and distribute the
#Work and such Derivative Works in Source or Object form.
#3. Grant of Patent License. Subject to the terms and conditions of
#this License, each Contributor hereby grants to You a perpetual,
#worldwide, non-exclusive, no-charge, royalty-free, irrevocable
#(except as stated in this section) patent license to make, have made,
#use, offer to sell, sell, import, and otherwise transfer the Work,
#where such license applies only to those patent claims licensable
#by such Contributor that are necessarily infringed by their
#Contribution(s) alone or by combination of their Contribution(s)
#with the Work to which such Contribution(s) was submitted. If You
#institute patent litigation against any entity (including a
#cross-claim or counterclaim in a lawsuit) alleging that the Work
#or a Contribution incorporated within the Work constitutes direct
#or contributory patent infringement, then any patent licenses
#granted to You under this License for that Work shall terminate
#as of the date such litigation is filed.
#4. Redistribution. You may reproduce and distribute copies of the
#Work or Derivative Works thereof in any medium, with or without
#modifications, and in Source or Object form, provided that You
#meet the following conditions:
#(a) You must give any other recipients of the Work or
#Derivative Works a copy of this License; and
#(b) You must cause any modified files to carry prominent notices
#stating that You changed the files; and
#(c) You must retain, in the Source form of any Derivative Works
#that You distribute, all copyright, patent, trademark, and
#attribution notices from the Source form of the Work,
#excluding those notices that do not pertain to any part of
#the Derivative Works; and
#(d) If the Work includes a "NOTICE" text file as part of its
#distribution, then any Derivative Works that You distribute must
#include a readable copy of the attribution notices contained
#within such NOTICE file, excluding those notices that do not
#pertain to any part of the Derivative Works, in at least one
#of the following places: within a NOTICE text file distributed
#as part of the Derivative Works; within the Source form or
#documentation, if provided along with the Derivative Works; or,
#within a display generated by the Derivative Works, if and
#wherever such third-party notices normally appear. The contents
#of the NOTICE file are for informational purposes only and
#do not modify the License. You may add Your own attribution
#notices within Derivative Works that You distribute, alongside
#or as an addendum to the NOTICE text from the Work, provided
#that such additional attribution notices cannot be construed
#as modifying the License.
#You may add Your own copyright statement to Your modifications and
#may provide additional or different license terms and conditions
#for use, reproduction, or distribution of Your modifications, or
#for any such Derivative Works as a whole, provided Your use,
#reproduction, and distribution of the Work otherwise complies with
#the conditions stated in this License.
#5. Submission of Contributions. Unless You explicitly state otherwise,
#any Contribution intentionally submitted for inclusion in the Work
#by You to the Licensor shall be under the terms and conditions of
#this License, without any additional terms or conditions.
#Notwithstanding the above, nothing herein shall supersede or modify
#the terms of any separate license agreement you may have executed
#with Licensor regarding such Contributions.
#6. Trademarks. This License does not grant permission to use the trade
#names, trademarks, service marks, or product names of the Licensor,
#except as required for reasonable and customary use in describing the
#origin of the Work and reproducing the content of the NOTICE file.
#7. Disclaimer of Warranty. Unless required by applicable law or
#agreed to in writing, Licensor provides the Work (and each
#Contributor provides its Contributions) on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
#implied, including, without limitation, any warranties or conditions
#of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
#PARTICULAR PURPOSE. You are solely responsible for determining the
#appropriateness of using or redistributing the Work and assume any
#risks associated with Your exercise of permissions under this License.
#8. Limitation of Liability. In no event and under no legal theory,
#whether in tort (including negligence), contract, or otherwise,
#unless required by applicable law (such as deliberate and grossly
#negligent acts) or agreed to in writing, shall any Contributor be
#liable to You for damages, including any direct, indirect, special,
#incidental, or consequential damages of any character arising as a
#result of this License or out of the use or inability to use the
#Work (including but not limited to damages for loss of goodwill,
#work stoppage, computer failure or malfunction, or any and all
#other commercial damages or losses), even if such Contributor
#has been advised of the possibility of such damages.
#9. Accepting Warranty or Additional Liability. While redistributing
#the Work or Derivative Works thereof, You may choose to offer,
#and charge a fee for, acceptance of support, warranty, indemnity,
#or other liability obligations and/or rights consistent with this
#License. However, in accepting such obligations, You may act only
#on Your own behalf and on Your sole responsibility, not on behalf
#of any other Contributor, and only if You agree to indemnify,
#defend, and hold each Contributor harmless for any liability
#incurred by, or claims asserted against, such Contributor by reason
#of your accepting any such warranty or additional liability.
#END OF TERMS AND CONDITIONS
#APPENDIX: How to apply the Apache License to your work.
#To apply the Apache License to your work, attach the following
#boilerplate notice, with the fields enclosed by brackets "[]"
#replaced with your own identifying information. (Don't include
#the brackets!) The text should be enclosed in the appropriate
#comment syntax for the file format. We also recommend that a
#file or class name and description of purpose be included on the
#same "printed page" as the copyright notice for easier
#identification within third-party archives.
#Copyright [yyyy] [name of copyright owner]
#Licensed under the Apache License, Version 2.0 (the "License");
#you may not use this file except in compliance with the License.
#You may obtain a copy of the License at
#http://www.apache.org/licenses/LICENSE-2.0
#Unless required by applicable law or agreed to in writing, software
#distributed under the License is distributed on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#See the License for the specific language governing permissions and
#limitations under the License.
###Output
_____no_output_____
###Markdown
**How to use this notebook?**---Videos describing how to use our notebooks are available on youtube: - [**Video 1**](https://www.youtube.com/watch?v=GzD2gamVNHI&feature=youtu.be): Full run through of the workflow to obtain the notebooks and the provided test datasets as well as a common use of the notebook - [**Video 2**](https://www.youtube.com/watch?v=PUuQfP5SsqM&feature=youtu.be): Detailed description of the different sections of the notebook---**Structure of a notebook**The notebook contains two types of cell: **Text cells** provide information and can be modified by double-clicking the cell. You are currently reading a text cell. You can create a new text cell by clicking `+ Text`.**Code cells** contain code and the code can be modified by selecting the cell. To execute the cell, move your cursor to the `[ ]`-mark on the left side of the cell (a play button appears). Click to execute the cell. After execution is done, the animation of the play button stops. You can create a new code cell by clicking `+ Code`.---**Table of contents, Code snippets** and **Files**On the top left side of the notebook you find three tabs which contain, from top to bottom:*Table of contents* = contains the structure of the notebook. Click the content to move quickly between sections.*Code snippets* = contains examples of how to code certain tasks. You can ignore this when using this notebook.*Files* = contains all available files. After mounting your Google Drive (see section 1.) you will find your files and folders here. **Remember that all uploaded files are purged after changing the runtime.** All files saved in Google Drive will remain. You do not need to use the Mount Drive-button; your Google Drive is connected in section 1.2.**Note:** The "sample data" in "Files" contains default files. Do not upload anything in here!---**Making changes to the notebook****You can make a copy** of the notebook and save it to your Google Drive. To do this click file -> save a copy in drive.To **edit a cell**, double click on the text. This will show you either the source code (in code cells) or the source text (in text cells).You can use the `#`-mark in code cells to comment out parts of the code. This allows you to keep the original code piece in the cell as a comment. **0. Before getting started**--- Preparing the dataset carefully is essential to make this Detectron2 notebook work. This model requires as input a set of images and as target a list of annotation files in Pascal VOC format. The annotation files should have the exact same name as the input files, except with an .xml instead of the .png extension. The annotation files contain the class labels and all bounding boxes for the objects in each image of your dataset. Most datasets offer the option of saving annotations in this format, and most hand-annotation software will save them in this format automatically. If you want to assemble your own dataset we recommend using the open source https://www.makesense.ai/ resource. You can follow our instructions on how to label your dataset with this tool on our [wiki](https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki/Object-Detection-(YOLOv2)).**We strongly recommend that you generate extra paired images. These images can be used to assess the quality of your trained model (Quality control dataset)**. The quality control assessment can be done directly in this notebook. **Additionally, the corresponding input and output files need to have the same name**. 
Please note that you currently can **only use .png files!**Here's a common data structure that can work:* Experiment A - **Training dataset** - Input images (Training_source) - img_1.png, img_2.png, ... - Annotation files (Training_source_annotations) - img_1.xml, img_2.xml, ... - **Quality control dataset** - Input images - img_1.png, img_2.png - Annotation files - img_1.xml, img_2.xml - **Data to be predicted** - **Results**---**Important note**- If you wish to **Train a network from scratch** using your own dataset (and we encourage everyone to do that), you will need to run **sections 1 - 4**, then use **section 5** to assess the quality of your model and **section 6** to run predictions using the model that you trained.- If you wish to **Evaluate your model** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 5** to assess the quality of your model.- If you only wish to **run predictions** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 6** to run the predictions on the desired model.--- **1. Initialise the Colab session**--- **1.1. Check for GPU access**---By default, the session should be using Python 3 and GPU acceleration, but it is possible to ensure that these are set properly by doing the following:Go to **Runtime -> Change the Runtime type****Runtime type: Python 3** *(Python 3 is the programming language in which this program is written)***Accelerator: GPU** *(Graphics processing unit)*
###Code
#@markdown ##Run this cell to check if you have GPU access
#%tensorflow_version 1.x
import tensorflow as tf
if tf.test.gpu_device_name()=='':
print('You do not have GPU access.')
print('Did you change your runtime ?')
print('If the runtime setting is correct then Google did not allocate a GPU for your session')
print('Expect slow performance. To access GPU try reconnecting later')
else:
print('You have GPU access')
!nvidia-smi
###Output
_____no_output_____
###Markdown
**1.2. Mount your Google Drive**--- To use this notebook on the data present in your Google Drive, you need to mount your Google Drive to this notebook. Play the cell below to mount your Google Drive and follow the link. In the new browser window, select your drive and select 'Allow', copy the code, paste into the cell and press enter. This will give Colab access to the data on the drive. Once this is done, your data are available in the **Files** tab on the top left of notebook.
###Code
#@markdown ##Play the cell to connect your Google Drive to Colab
#@markdown * Click on the URL.
#@markdown * Sign in your Google Account.
#@markdown * Copy the authorization code.
#@markdown * Enter the authorization code.
#@markdown * Click on "Files" site on the right. Refresh the site. Your Google Drive folder should now be available here as "drive".
# mount user's Google Drive to Google Colab.
from google.colab import drive
drive.mount('/content/gdrive')
###Output
_____no_output_____
###Markdown
**2. Install Detectron2 and dependencies**--- **2.1. Install key dependencies**---
###Code
#@markdown ##Install dependencies and Detectron2
# install dependencies
#!pip install -U torch torchvision cython
!pip install -U 'git+https://github.com/facebookresearch/fvcore.git' 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
import torch, torchvision
torch.__version__
!git clone https://github.com/facebookresearch/detectron2 detectron2_repo
!pip install -e detectron2_repo
!pip install wget
#Force session restart
exit(0)
###Output
_____no_output_____
###Markdown
**2.2. Restart your runtime**--- ** Your Runtime has automatically restarted. This is normal.** **2.3. Load key dependencies**---
###Code
Notebook_version = ['1.12']
#@markdown ##Play this cell to load the required dependencies
import wget
# Some basic setup:
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()
# import some common libraries
import numpy as np
import os, json, cv2, random
from google.colab.patches import cv2_imshow
import yaml
#Download the script to convert XML into COCO
wget.download("https://github.com/HenriquesLab/ZeroCostDL4Mic/raw/master/Tools/voc2coco.py", "/content")
# import some common detectron2 utilities
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog, DatasetCatalog
from detectron2.utils.visualizer import ColorMode
from detectron2.data import DatasetCatalog, MetadataCatalog, build_detection_test_loader
from datetime import datetime
from detectron2.data.catalog import Metadata
from detectron2.config import get_cfg
from detectron2.data import DatasetCatalog, MetadataCatalog, build_detection_test_loader
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.engine import DefaultTrainer
from detectron2.data.datasets import register_coco_instances
from detectron2.utils.visualizer import ColorMode
import glob
from detectron2.checkpoint import Checkpointer
from detectron2.config import get_cfg
import os
# ------- Common variable to all ZeroCostDL4Mic notebooks -------
import numpy as np
from matplotlib import pyplot as plt
import urllib
import os, random
import shutil
import zipfile
from tifffile import imread, imsave
import time
import sys
from pathlib import Path
import pandas as pd
import csv
from glob import glob
from scipy import signal
from scipy import ndimage
from skimage import io
from sklearn.linear_model import LinearRegression
from skimage.util import img_as_uint
import matplotlib as mpl
from skimage.metrics import structural_similarity
from skimage.metrics import peak_signal_noise_ratio as psnr
from astropy.visualization import simple_norm
from skimage import img_as_float32
# Colors for the warning messages
class bcolors:
WARNING = '\033[31m'
W = '\033[0m' # white (normal)
R = '\033[31m' # red
#Disable some of the tensorflow warnings
import warnings
warnings.filterwarnings("ignore")
from detectron2.engine import DefaultTrainer
from detectron2.evaluation import COCOEvaluator
class CocoTrainer(DefaultTrainer):
@classmethod
def build_evaluator(cls, cfg, dataset_name, output_folder=None):
if output_folder is None:
os.makedirs("coco_eval", exist_ok=True)
output_folder = "coco_eval"
return COCOEvaluator(dataset_name, cfg, False, output_folder)
print("Librairies loaded")
# Check if this is the latest version of the notebook
Latest_notebook_version = pd.read_csv("https://raw.githubusercontent.com/HenriquesLab/ZeroCostDL4Mic/master/Colab_notebooks/Latest_ZeroCostDL4Mic_Release.csv")
if Notebook_version == list(Latest_notebook_version.columns):
print("This notebook is up-to-date.")
if not Notebook_version == list(Latest_notebook_version.columns):
print(bcolors.WARNING +"A new version of this notebook has been released. We recommend that you download it at https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki")
#Failsafes
cell_ran_prediction = 0
cell_ran_training = 0
cell_ran_QC_training_dataset = 0
cell_ran_QC_QC_dataset = 0
###Output
_____no_output_____
###Markdown
**3. Select your parameters and paths** **3.1. Setting main training parameters**--- **Paths for training, predictions and results****`Training_source:`, `Training_target`:** These are the paths to your folders containing the Training_source and the annotation data respectively. To find the paths of the folders containing the respective datasets, go to your Files on the left of the notebook, navigate to the folder containing your files and copy the path by right-clicking on the folder, **Copy path** and pasting it into the right box below.**`labels`:** Input the names of the different labels used to annotate your dataset (separated by a comma).**`model_name`:** Use only my_model -style, not my-model (Use "_" not "-"). Do not use spaces in the name. Avoid using the name of an existing model (saved in the same folder) as it will be overwritten.**`model_path`**: Enter the path where your model will be saved once trained (for instance your result folder).**Training Parameters****`number_of_iteration`:** Input how many iterations to use to train the network. Initial results can be observed using 1000 iterations but consider using 5000 or more iterations to train your models. **Default value: 2000** **Advanced Parameters - experienced users only****`batch_size:`** This parameter defines the number of images seen in each training step. Reduce this parameter if your GPU runs out of memory. **Default value: 4****`number_of_steps`:** Define the number of training steps by epoch. By default this parameter is calculated so that each image / patch is seen at least once per epoch. **Default value: Number of patch / batch_size****`percentage_validation`:** Input the percentage of your training dataset you want to use to validate the network during the training. **Default value: 10****`initial_learning_rate`:** Input the initial value to be used as learning rate. **Default value: 0.001**
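Before filling in the cell below, a quick pre-flight check can save a failed run. The sketch below (the folder paths and label names are hypothetical placeholders) verifies that the `labels` string uses the separator the code expects (a comma followed by a space, since the cell splits on `", "`) and that every `.png` image in `Training_source` has a matching `.xml` annotation in `Training_target`:

```python
import os

# Hypothetical placeholder paths -- replace with your own folders.
Training_source = "/content/gdrive/My Drive/Experiment A/Training_source"
Training_target = "/content/gdrive/My Drive/Experiment A/Training_source_annotations"

# The training cell splits the label string on ", " (comma + space).
labels = "label_1, label_2"
print("Parsed labels:", labels.split(", "))

# Every .png image should have an .xml annotation with the same base name.
images = {os.path.splitext(f)[0] for f in os.listdir(Training_source) if f.endswith(".png")}
annots = {os.path.splitext(f)[0] for f in os.listdir(Training_target) if f.endswith(".xml")}
print("Images without annotations:", sorted(images - annots))
print("Annotations without images:", sorted(annots - images))
```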
###Code
# Set the training paths and parameters below.
#@markdown ###Path to training image(s):
Training_source = "" #@param {type:"string"}
Training_target = "" #@param {type:"string"}
#@markdown ###Labels
#@markdown Input the names of the different labels present in your training dataset, separated by a comma
labels = "" #@param {type:"string"}
#@markdown ### Model name and path:
model_name = "" #@param {type:"string"}
model_path = "" #@param {type:"string"}
full_model_path = model_path+'/'+model_name+'/'
#@markdown ###Training Parameters
#@markdown Number of iterations:
number_of_iteration = 2000#@param {type:"number"}
#Here we store the informations related to our labels
list_of_labels = labels.split(", ")
with open('/content/labels.txt', 'w') as f:
for item in list_of_labels:
print(item, file=f)
number_of_labels = len(list_of_labels)
#@markdown ###Advanced Parameters
Use_Default_Advanced_Parameters = True#@param {type:"boolean"}
#@markdown ###If not, please input:
batch_size = 4#@param {type:"number"}
percentage_validation = 10#@param {type:"number"}
initial_learning_rate = 0.001 #@param {type:"number"}
if (Use_Default_Advanced_Parameters):
print("Default advanced parameters enabled")
batch_size = 4
percentage_validation = 10
initial_learning_rate = 0.001
# Here we enable the pre-trained model by default (in case the next cell is not run)
Use_pretrained_model = True
# Here we enable data augmentation by default (in case the cell is not run)
Use_Data_augmentation = True
# Here we split the data between training and validation
# Here we count the number of files in the training target folder
Filelist = os.listdir(Training_target)
number_files = len(Filelist)
File_for_validation = int((number_files)/percentage_validation)+1
#Here we split the training dataset between training and validation
# Everything is copied in the /Content Folder
Training_source_temp = "/content/training_source"
if os.path.exists(Training_source_temp):
shutil.rmtree(Training_source_temp)
os.makedirs(Training_source_temp)
Training_target_temp = "/content/training_target"
if os.path.exists(Training_target_temp):
shutil.rmtree(Training_target_temp)
os.makedirs(Training_target_temp)
Validation_source_temp = "/content/validation_source"
if os.path.exists(Validation_source_temp):
shutil.rmtree(Validation_source_temp)
os.makedirs(Validation_source_temp)
Validation_target_temp = "/content/validation_target"
if os.path.exists(Validation_target_temp):
shutil.rmtree(Validation_target_temp)
os.makedirs(Validation_target_temp)
list_source = os.listdir(os.path.join(Training_source))
list_target = os.listdir(os.path.join(Training_target))
#Move files into the temporary source and target directories:
for f in os.listdir(os.path.join(Training_source)):
shutil.copy(Training_source+"/"+f, Training_source_temp+"/"+f)
for p in os.listdir(os.path.join(Training_target)):
shutil.copy(Training_target+"/"+p, Training_target_temp+"/"+p)
list_source_temp = os.listdir(os.path.join(Training_source_temp))
list_target_temp = os.listdir(os.path.join(Training_target_temp))
#Here we move images to be used for validation
for i in range(File_for_validation):
name = list_source_temp[i]
shutil.move(Training_source_temp+"/"+name, Validation_source_temp+"/"+name)
shortname_no_extension = name[:-4]
shutil.move(Training_target_temp+"/"+shortname_no_extension+".xml", Validation_target_temp+"/"+shortname_no_extension+".xml")
# Here we convert the XML files into COCO format to be loaded in detectron2
#First we need to create list of labels to generate the json dictionaries
list_source_training_temp = os.listdir(os.path.join(Training_source_temp))
list_source_validation_temp = os.listdir(os.path.join(Validation_source_temp))
name_no_extension_training = []
for n in list_source_training_temp:
name_no_extension_training.append(os.path.splitext(n)[0])
name_no_extension_validation = []
for n in list_source_validation_temp:
name_no_extension_validation.append(os.path.splitext(n)[0])
#Save the list of labels as text file
with open('/content/training_files.txt', 'w') as f:
for item in name_no_extension_training:
print(item, end='\n', file=f)
with open('/content/validation_files.txt', 'w') as f:
for item in name_no_extension_validation:
print(item, end='\n', file=f)
file_output_training = Training_target_temp+"/output.json"
file_output_validation = Validation_target_temp+"/output.json"
os.chdir("/content")
!python voc2coco.py --ann_dir "$Training_target_temp" --output "$file_output_training" --ann_ids "/content/training_files.txt" --labels "/content/labels.txt" --ext xml
!python voc2coco.py --ann_dir "$Validation_target_temp" --output "$file_output_validation" --ann_ids "/content/validation_files.txt" --labels "/content/labels.txt" --ext xml
os.chdir("/")
#Here we load the dataset to detectron2
if cell_ran_training == 0:
from detectron2.data.datasets import register_coco_instances
register_coco_instances("my_dataset_train", {}, Training_target_temp+"/output.json", Training_source_temp)
register_coco_instances("my_dataset_val", {}, Validation_target_temp+"/output.json", Validation_source_temp)
#visualize training data
my_dataset_train_metadata = MetadataCatalog.get("my_dataset_train")
dataset_dicts = DatasetCatalog.get("my_dataset_train")
import random
from detectron2.utils.visualizer import Visualizer
for d in random.sample(dataset_dicts, 1):
img = cv2.imread(d["file_name"])
visualizer = Visualizer(img[:, :, ::-1], metadata=my_dataset_train_metadata, instance_mode=ColorMode.SEGMENTATION, scale=0.8)
vis = visualizer.draw_dataset_dict(d)
cv2_imshow(vis.get_image()[:, :, ::-1])
# failsafe
cell_ran_training = 1
###Output
_____no_output_____
###Markdown
**3.2. Data augmentation** --- Data augmentation is currently enabled by default in this notebook. The option to disable data augmentation is not yet available. **3.3. Using weights from a pre-trained model as initial weights**--- Here, you can set the path to a pre-trained model from which the weights can be extracted and used as a starting point for this training session. **This pre-trained model needs to be a Detectron2 model**.
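If you use the `Model_from_file` option, point it at a folder previously saved by this notebook: the training cell reads back its `config.yaml` and `model_final.pth`, and the evaluation and prediction sections additionally expect a `labels.txt` in the same folder. A minimal check, with a hypothetical path:

```python
import os

# Hypothetical path to a model folder previously saved by this notebook.
pretrained_model_path = "/content/gdrive/My Drive/models/my_detectron2_model"

for required in ("config.yaml", "model_final.pth", "labels.txt"):
    full_path = os.path.join(pretrained_model_path, required)
    print(f"{required}: {'found' if os.path.exists(full_path) else 'MISSING'}")
```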
###Code
# @markdown ##Loading weights from a pre-trained network
Use_pretrained_model = True #@param {type:"boolean"}
pretrained_model_choice = "Faster R-CNN" #@param ["Faster R-CNN","RetinaNet", "Model_from_file"]
#pretrained_model_choice = "Faster R-CNN" #@param ["Faster R-CNN", "RetinaNet", "RPN & Fast R-CNN", "Model_from_file"]
#@markdown ###If you chose "Model_from_file", please provide the path to the model folder:
pretrained_model_path = "" #@param {type:"string"}
# --------------------- Check if we load a previously trained model ------------------------
if Use_pretrained_model:
# --------------------- Load the model from the chosen path ------------------------
if pretrained_model_choice == "Model_from_file":
h5_file_path = pretrained_model_path
print('Weights found in:')
print(h5_file_path)
print('will be loaded prior to training.')
if not os.path.exists(h5_file_path) and Use_pretrained_model:
print('WARNING pretrained model does not exist')
h5_file_path = "COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml"
print('The Faster R-CNN model will be used.')
if pretrained_model_choice == "Faster R-CNN":
h5_file_path = "COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml"
print('The Faster R-CNN model will be used.')
if pretrained_model_choice == "RetinaNet":
h5_file_path = "COCO-Detection/retinanet_R_101_FPN_3x.yaml"
print('The RetinaNet model will be used.')
if pretrained_model_choice == "RPN & Fast R-CNN":
h5_file_path = "COCO-Detection/fast_rcnn_R_50_FPN_1x.yaml"
if not Use_pretrained_model:
h5_file_path = "COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml"
print('The Faster R-CNN model will be used.')
###Output
_____no_output_____
###Markdown
**4. Train the network**--- **4.1. Start Training**---When playing the cell below you should see regular progress updates as training proceeds. Network training can take some time.* **CRITICAL NOTE:** Google Colab has a time limit for processing (to prevent using GPU power for datamining). Training time must be less than 12 hours! If training takes longer than 12 hours, please decrease the number of iterations or the size of your training dataset. Another way to circumvent this is to save the parameters of the model after training and start training again from this point.
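If a run is interrupted, one option is to point section 3.3 at the partially trained model folder (`Model_from_file`). Alternatively, a minimal sketch of resuming directly from the last checkpoint is shown below; it reuses `cfg`, `CocoTrainer` and `full_model_path` from the cells of this notebook and assumes the training cell below has already written at least one checkpoint to the model folder:

```python
# A sketch: resume training from the last checkpoint written to the model folder.
# resume=True makes Detectron2 load the latest checkpoint (and iteration counter)
# found in cfg.OUTPUT_DIR instead of starting from the initial weights.
cfg.OUTPUT_DIR = full_model_path
trainer = CocoTrainer(cfg)
trainer.resume_or_load(resume=True)
trainer.train()
```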
###Code
#@markdown ##Start training
# Create the model folder
if os.path.exists(full_model_path):
shutil.rmtree(full_model_path)
os.makedirs(full_model_path)
#Copy the label names in the model folder
shutil.copy("/content/labels.txt", full_model_path+"/"+"labels.txt")
#PDF export
#######################################
## MISSING
#######################################
#To be added
start = time.time()
#Load the config files
cfg = get_cfg()
if pretrained_model_choice == "Model_from_file":
cfg.merge_from_file(pretrained_model_path+"/config.yaml")
if not pretrained_model_choice == "Model_from_file":
cfg.merge_from_file(model_zoo.get_config_file(h5_file_path))
cfg.DATASETS.TRAIN = ("my_dataset_train",)
cfg.DATASETS.TEST = ("my_dataset_val",)
cfg.OUTPUT_DIR= (full_model_path)
cfg.DATALOADER.NUM_WORKERS = 4
if pretrained_model_choice == "Model_from_file":
cfg.MODEL.WEIGHTS = pretrained_model_path+"/model_final.pth" # Let training initialize from model zoo
if not pretrained_model_choice == "Model_from_file":
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(h5_file_path) # Let training initialize from model zoo
cfg.SOLVER.IMS_PER_BATCH = int(batch_size)
cfg.SOLVER.BASE_LR = initial_learning_rate
cfg.SOLVER.WARMUP_ITERS = 1000
cfg.SOLVER.MAX_ITER = int(number_of_iteration) #adjust up if val mAP is still rising, adjust down if overfit
cfg.SOLVER.STEPS = (1000, 1500)
cfg.SOLVER.GAMMA = 0.05
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512
if pretrained_model_choice == "Faster R-CNN":
cfg.MODEL.ROI_HEADS.NUM_CLASSES = (number_of_labels)
if pretrained_model_choice == "RetinaNet":
cfg.MODEL.RETINANET.NUM_CLASSES = (number_of_labels)
cfg.TEST.EVAL_PERIOD = 500
trainer = CocoTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
#Save the config file after training
config = cfg.dump() # serialize the training configuration to a YAML string
file1 = open(full_model_path+"/config.yaml", 'w')
file1.writelines(config)
file1.close()
#Save the label file after training
dt = time.time() - start
mins, sec = divmod(dt, 60)
hour, mins = divmod(mins, 60)
print("Time elapsed:",hour, "hour(s)",mins,"min(s)",round(sec),"sec(s)")
###Output
_____no_output_____
###Markdown
**4.2. Download your model(s) from Google Drive**---Once training is complete, the trained model is automatically saved on your Google Drive, in the **model_path** folder that was selected in Section 3. It is however wise to download the folder as all data can be erased at the next training if using the same folder. **5. Evaluate your model**---This section allows the user to perform important quality checks on the validity and generalisability of the trained model. Detectron2 requires you to reload your training dataset in order to perform the quality control step.**We highly recommend performing quality control on all newly trained models.**
###Code
# model name and path
#@markdown ###Do you want to assess the model you just trained ?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder as well as the location of your training dataset:
#@markdown ####Path to trained model to be assessed:
QC_model_folder = "" #@param {type:"string"}
#@markdown ####Path to the image(s) used for training:
Training_source = "" #@param {type:"string"}
Training_target = "" #@param {type:"string"}
#Here we define the loaded model name and path
QC_model_name = os.path.basename(QC_model_folder)
QC_model_path = os.path.dirname(QC_model_folder)
if (Use_the_current_trained_model):
QC_model_name = model_name
QC_model_path = model_path
full_QC_model_path = QC_model_path+'/'+QC_model_name+'/'
if os.path.exists(full_QC_model_path):
print("The "+QC_model_name+" network will be evaluated")
else:
print(bcolors.WARNING + '!! WARNING: The chosen model does not exist !!')
print('Please make sure you provide a valid model path and model name before proceeding further.')
# Here we load the list of classes stored in the model folder
list_of_labels_QC =[]
with open(full_QC_model_path+'labels.txt', newline='') as csvfile:
reader = csv.reader(csvfile)
for row in reader:
list_of_labels_QC.append(row[0])
#Here we create a list of color for later display
color_list = []
for i in range(len(list_of_labels_QC)):
color = list(np.random.choice(range(256), size=3))
color_list.append(color)
#Save the list of labels as text file
if not (Use_the_current_trained_model):
with open('/content/labels.txt', 'w') as f:
for item in list_of_labels_QC:
print(item, file=f)
# Here we split the data between training and validation
# Here we count the number of files in the training target folder
Filelist = os.listdir(Training_target)
number_files = len(Filelist)
percentage_validation= 10
File_for_validation = int((number_files)/percentage_validation)+1
#Here we split the training dataset between training and validation
# Everything is copied in the /Content Folder
Training_source_temp = "/content/training_source"
if os.path.exists(Training_source_temp):
shutil.rmtree(Training_source_temp)
os.makedirs(Training_source_temp)
Training_target_temp = "/content/training_target"
if os.path.exists(Training_target_temp):
shutil.rmtree(Training_target_temp)
os.makedirs(Training_target_temp)
Validation_source_temp = "/content/validation_source"
if os.path.exists(Validation_source_temp):
shutil.rmtree(Validation_source_temp)
os.makedirs(Validation_source_temp)
Validation_target_temp = "/content/validation_target"
if os.path.exists(Validation_target_temp):
shutil.rmtree(Validation_target_temp)
os.makedirs(Validation_target_temp)
list_source = os.listdir(os.path.join(Training_source))
list_target = os.listdir(os.path.join(Training_target))
#Move files into the temporary source and target directories:
for f in os.listdir(os.path.join(Training_source)):
shutil.copy(Training_source+"/"+f, Training_source_temp+"/"+f)
for p in os.listdir(os.path.join(Training_target)):
shutil.copy(Training_target+"/"+p, Training_target_temp+"/"+p)
list_source_temp = os.listdir(os.path.join(Training_source_temp))
list_target_temp = os.listdir(os.path.join(Training_target_temp))
#Here we move images to be used for validation
for i in range(File_for_validation):
name = list_source_temp[i]
shutil.move(Training_source_temp+"/"+name, Validation_source_temp+"/"+name)
shortname_no_extension = name[:-4]
shutil.move(Training_target_temp+"/"+shortname_no_extension+".xml", Validation_target_temp+"/"+shortname_no_extension+".xml")
#First we need to create list of labels to generate the json dictionaries
list_source_training_temp = os.listdir(os.path.join(Training_source_temp))
list_source_validation_temp = os.listdir(os.path.join(Validation_source_temp))
name_no_extension_training = []
for n in list_source_training_temp:
name_no_extension_training.append(os.path.splitext(n)[0])
name_no_extension_validation = []
for n in list_source_validation_temp:
name_no_extension_validation.append(os.path.splitext(n)[0])
#Save the list of labels as text file
with open('/content/training_files.txt', 'w') as f:
for item in name_no_extension_training:
print(item, end='\n', file=f)
with open('/content/validation_files.txt', 'w') as f:
for item in name_no_extension_validation:
print(item, end='\n', file=f)
file_output_training = Training_target_temp+"/output.json"
file_output_validation = Validation_target_temp+"/output.json"
os.chdir("/content")
!python voc2coco.py --ann_dir "$Training_target_temp" --output "$file_output_training" --ann_ids "/content/training_files.txt" --labels "/content/labels.txt" --ext xml
!python voc2coco.py --ann_dir "$Validation_target_temp" --output "$file_output_validation" --ann_ids "/content/validation_files.txt" --labels "/content/labels.txt" --ext xml
os.chdir("/")
#Here we load the dataset to detectron2
if cell_ran_QC_training_dataset == 0:
from detectron2.data.datasets import register_coco_instances
register_coco_instances("my_dataset_train", {}, Training_target_temp+"/output.json", Training_source_temp)
register_coco_instances("my_dataset_val", {}, Validation_target_temp+"/output.json", Validation_source_temp)
#Failsafe for later
cell_ran_QC_training_dataset = 1
###Output
_____no_output_____
###Markdown
**5.1. Inspection of the loss function**---It is good practice to evaluate the training progress by checking whether your model is steadily improving over time. The following cell will allow you to load Tensorboard and investigate how several metrics evolved over time (iterations).
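Tensorboard reads the event files that Detectron2 writes into the model folder during training. As an alternative to the Tensorboard cell below, the same scalars are also logged to a `metrics.json` file in the output directory; a minimal sketch of plotting the total loss from that file (assuming training wrote it) is:

```python
import json
import os
from matplotlib import pyplot as plt

# A sketch, assuming Detectron2 wrote a metrics.json file into the model folder during training.
metrics_file = os.path.join(full_QC_model_path, "metrics.json")

iterations, total_loss = [], []
with open(metrics_file) as f:
    for line in f:
        entry = json.loads(line)
        if "total_loss" in entry:  # some lines (e.g. evaluation results) carry no loss value
            iterations.append(entry["iteration"])
            total_loss.append(entry["total_loss"])

plt.plot(iterations, total_loss)
plt.xlabel("iteration")
plt.ylabel("total loss")
plt.title("Training loss read from metrics.json")
plt.show()
```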
###Code
#@markdown ##Play the cell to load tensorboard
%load_ext tensorboard
%tensorboard --logdir "$full_QC_model_path"
###Output
_____no_output_____
###Markdown
**5.2. Error mapping and quality metrics estimation**---This section will compare the predictions generated by your model against the ground truth. Additionally, the cell below will show the mAP value of the model on the QC data. If you want to read in more detail about this score, we recommend [this brief explanation](https://medium.com/@jonathan_hui/map-mean-average-precision-for-object-detection-45c121a31173). The images provided in the "Source_QC_folder" and "Target_QC_folder" should contain images (e.g. as .png) and annotations (.xml files)!**mAP score:** This refers to the mean average precision of the model on the given dataset. This value gives an indication of how precise the predictions of the classes on this dataset are when compared to the ground truth. Values closer to 1 indicate a good fit.
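The cell below prints the COCO-style evaluation table produced by `inference_on_dataset`, but it does not store the numbers. If you want to work with them programmatically (for instance to compare several models), a small sketch run after the cell below could capture the returned dictionary; the key names (`bbox`, `AP`, `AP50`) follow the standard COCO box metrics reported by Detectron2's `COCOEvaluator`:

```python
# A sketch, to be run after the quality-control cell below has created
# `trainer`, `val_loader` and `evaluator`: capture the metrics instead of only printing them.
results = inference_on_dataset(trainer.model, val_loader, evaluator)
bbox_metrics = results.get("bbox", {})
print("mAP (AP@[0.5:0.95]):", bbox_metrics.get("AP"))
print("AP50:", bbox_metrics.get("AP50"))
```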
###Code
#@markdown ##Choose the folders that contain your Quality Control dataset
Source_QC_folder = "" #@param{type:"string"}
Target_QC_folder = "" #@param{type:"string"}
if cell_ran_QC_QC_dataset == 0:
#Save the list of labels as text file
with open('/content/labels_QC.txt', 'w') as f:
for item in list_of_labels_QC:
print(item, file=f)
#Here we create temp folder for the QC
QC_source_temp = "/content/QC_source"
if os.path.exists(QC_source_temp):
shutil.rmtree(QC_source_temp)
os.makedirs(QC_source_temp)
QC_target_temp = "/content/QC_target"
if os.path.exists(QC_target_temp):
shutil.rmtree(QC_target_temp)
os.makedirs(QC_target_temp)
# Create a quality control/Prediction Folder
if os.path.exists(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction"):
shutil.rmtree(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
os.makedirs(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
#Here we move the QC files to the temp
for f in os.listdir(os.path.join(Source_QC_folder)):
shutil.copy(Source_QC_folder+"/"+f, QC_source_temp+"/"+f)
for p in os.listdir(os.path.join(Target_QC_folder)):
shutil.copy(Target_QC_folder+"/"+p, QC_target_temp+"/"+p)
#Here we convert the XML files into JSON
#Save the list of files
list_source_QC_temp = os.listdir(os.path.join(QC_source_temp))
name_no_extension_QC = []
for n in list_source_QC_temp:
name_no_extension_QC.append(os.path.splitext(n)[0])
with open('/content/QC_files.txt', 'w') as f:
for item in name_no_extension_QC:
print(item, end='\n', file=f)
#Convert XML into JSON
file_output_QC = QC_target_temp+"/output.json"
os.chdir("/content")
!python voc2coco.py --ann_dir "$QC_target_temp" --output "$file_output_QC" --ann_ids "/content/QC_files.txt" --labels "/content/labels.txt" --ext xml
os.chdir("/")
#Here we register the QC dataset
register_coco_instances("my_dataset_QC", {}, QC_target_temp+"/output.json", QC_source_temp)
cell_ran_QC_QC_dataset = 1
#Load the model to use
cfg = get_cfg()
cfg.merge_from_file(full_QC_model_path+"config.yaml")
cfg.MODEL.WEIGHTS = os.path.join(full_QC_model_path, "model_final.pth")
cfg.DATASETS.TEST = ("my_dataset_QC", )
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
#Metadata
test_metadata = MetadataCatalog.get("my_dataset_QC")
test_metadata.set(thing_colors = color_list)
# For the evaluation we need to load the trainer
trainer = CocoTrainer(cfg)
trainer.resume_or_load(resume=True)
# Here we need to load the predictor
predictor = DefaultPredictor(cfg)
evaluator = COCOEvaluator("my_dataset_QC", cfg, False, output_dir=QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
val_loader = build_detection_test_loader(cfg, "my_dataset_QC")
inference_on_dataset(trainer.model, val_loader, evaluator)
print("A prediction is displayed")
dataset_QC_dicts = DatasetCatalog.get("my_dataset_QC")
for d in random.sample(dataset_QC_dicts, 1):
print("Ground Truth")
img = cv2.imread(d["file_name"])
visualizer = Visualizer(img[:, :, ::-1], metadata=test_metadata, instance_mode=ColorMode.SEGMENTATION, scale=0.5)
vis = visualizer.draw_dataset_dict(d)
cv2_imshow(vis.get_image()[:, :, ::-1])
print("A prediction is displayed")
im = cv2.imread(d["file_name"])
outputs = predictor(im)
v = Visualizer(im[:, :, ::-1],
metadata=test_metadata,
instance_mode=ColorMode.SEGMENTATION,
scale=0.5
)
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2_imshow(out.get_image()[:, :, ::-1])
cell_ran_QC_QC_dataset = 1
###Output
_____no_output_____
###Markdown
**6. Using the trained model**---In this section the unseen data is processed using the trained model (in section 4). First, your unseen images are uploaded and prepared for prediction. After that, your trained model from section 4 is used to process the images and the results are saved into your Google Drive. **6.1. Generate prediction(s) from unseen dataset**---The current trained model (from section 4.2) can now be used to process images. If an older model needs to be used, please untick the **Use_the_current_trained_model** box and enter the name and path of the model to use. The predictions are saved in your **Result_folder** folder as CSV files (one file per image, listing the detected bounding boxes, classes and confidence scores).**`Data_folder`:** This folder should contain the images that you want to predict using the network that you trained.**`Result_folder`:** This folder will contain the prediction results.
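Each prediction is exported by the cell below as a CSV file with one row per detected box (columns `x1, y1, x2, y2, box width, box height, class, score`). A minimal sketch of reading one of these files back with pandas (the folder and file name are hypothetical examples):

```python
import pandas as pd

# Hypothetical example of a file written by the prediction cell below:
# <Result_folder>/<image name>_predictions.csv
predictions = pd.read_csv("/content/results/img_1.png_predictions.csv")

# Count the detections per class and report the mean confidence score.
print(predictions["class"].value_counts())
print("Mean score:", predictions["score"].mean())
```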
###Code
#@markdown ### Provide the path to your dataset and to the folder where the prediction will be saved, then play the cell to predict output on your unseen images.
#@markdown ###Path to data to analyse and where predicted output should be saved:
Data_folder = "" #@param {type:"string"}
Result_folder = "" #@param {type:"string"}
# model name and path
#@markdown ###Do you want to use the current trained model?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder as well as the location of your training dataset:
#@markdown ####Path to trained model to be assessed:
Prediction_model_folder = "" #@param {type:"string"}
#Here we find the loaded model name and parent path
Prediction_model_name = os.path.basename(Prediction_model_folder)
Prediction_model_path = os.path.dirname(Prediction_model_folder)
if (Use_the_current_trained_model):
print("Using current trained network")
Prediction_model_name = model_name
Prediction_model_path = model_path
full_Prediction_model_path = Prediction_model_path+'/'+Prediction_model_name+'/'
if os.path.exists(full_Prediction_model_path):
print("The "+Prediction_model_name+" network will be used.")
else:
print(bcolors.WARNING +'!! WARNING: The chosen model does not exist !!')
print('Please make sure you provide a valid model path and model name before proceeding further.')
#Here we will load the label file
list_of_labels_predictions =[]
with open(full_Prediction_model_path+'labels.txt', newline='') as csvfile:
reader = csv.reader(csvfile)
for row in csv.reader(csvfile):
list_of_labels_predictions.append(row[0])
#Here we create a list of color
color_list = []
for i in range(len(list_of_labels_predictions)):
color = list(np.random.choice(range(256), size=3))
color_list.append(color)
#Activate the pretrained model.
# Create config
cfg = get_cfg()
cfg.merge_from_file(full_Prediction_model_path+"config.yaml")
cfg.MODEL.WEIGHTS = os.path.join(full_Prediction_model_path, "model_final.pth")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # set threshold for this model
# Create predictor
predictor = DefaultPredictor(cfg)
#Load the metadata from the prediction file
prediction_metadata = Metadata()
prediction_metadata.set(thing_classes = list_of_labels_predictions)
prediction_metadata.set(thing_color = color_list)
start = datetime.now()
validation_folder = Path(Data_folder)
for i, file in enumerate(validation_folder.glob("*.png")):
  # this loop opens the .png files from the data folder, runs the predictor on each image,
  # saves the detections to a CSV file and the visualisation as a PNG in the result folder.
file = str(file)
file_name = file.split("/")[-1]
im = cv2.imread(file)
#Prediction are done here
outputs = predictor(im)
#here we extract the results into numpy arrays
Classes_predictions = outputs["instances"].pred_classes.cpu().data.numpy()
boxes_predictions = outputs["instances"].pred_boxes.tensor.cpu().numpy()
Score_predictions = outputs["instances"].scores.cpu().data.numpy()
#here we save the results into a csv file
prediction_csv = Result_folder+"/"+file_name+"_predictions.csv"
with open(prediction_csv, 'w') as f:
writer = csv.writer(f)
writer.writerow(['x1','y1','x2','y2','box width','box height', 'class', 'score' ])
for i in range(len(boxes_predictions)):
x1 = boxes_predictions[i][0]
y1 = boxes_predictions[i][1]
x2 = boxes_predictions[i][2]
y2 = boxes_predictions[i][3]
box_width = x2 - x1
box_height = y2 -y1
writer.writerow([str(x1), str(y1), str(x2), str(y2), str(box_width), str(box_height), str(list_of_labels_predictions[Classes_predictions[i]]), Score_predictions[i]])
# The last example is displayed
v = Visualizer(im, metadata=prediction_metadata, instance_mode=ColorMode.SEGMENTATION, scale=1)
v = v.draw_instance_predictions(outputs["instances"].to("cpu"))
plt.figure(figsize=(20,20))
plt.imshow(v.get_image()[:, :, ::-1])
plt.axis('off');
plt.savefig(Result_folder+"/"+file_name)
print("Time needed for inferencing:", datetime.now() - start)
###Output
_____no_output_____
###Markdown
**This notebook is in beta**Expect some instabilities and bugs.**Currently missing features include:**- Augmentation cannot be disabled- Exported results include only a simple CSV file. More options will be included in the next releases- Training and QC reports are not generated **Detectron2 (2D)** Detectron2 is a deep-learning method designed to perform object detection and classification of objects in images. Detectron2 is Facebook AI Research's next generation software system that implements state-of-the-art object detection algorithms. It is a ground-up rewrite of the previous version, Detectron, and it originates from maskrcnn-benchmark. More information on Detectron2 can be found on the Detectron2 github pages (https://github.com/facebookresearch/detectron2).**This particular notebook enables object detection and classification on 2D images given ground truth bounding boxes. If you are interested in image segmentation, you should use our U-net or Stardist notebooks instead.**---*Disclaimer*:This notebook is part of the Zero-Cost Deep-Learning to Enhance Microscopy project (https://github.com/HenriquesLab/DeepLearning_Collab/wiki). Jointly developed by the Jacquemet (link to https://cellmig.org/) and Henriques (https://henriqueslab.github.io/) laboratories. **License**---
###Code
#@markdown ##Double click to see the license information
#------------------------- LICENSE FOR ZeroCostDL4Mic------------------------------------
#This ZeroCostDL4Mic notebook is distributed under the MIT licence
#------------------------- LICENSE FOR CycleGAN ------------------------------------
#Apache License
#Version 2.0, January 2004
#http://www.apache.org/licenses/
#TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
#1. Definitions.
#"License" shall mean the terms and conditions for use, reproduction,
#and distribution as defined by Sections 1 through 9 of this document.
#"Licensor" shall mean the copyright owner or entity authorized by
#the copyright owner that is granting the License.
#"Legal Entity" shall mean the union of the acting entity and all
#other entities that control, are controlled by, or are under common
#control with that entity. For the purposes of this definition,
#"control" means (i) the power, direct or indirect, to cause the
#direction or management of such entity, whether by contract or
#otherwise, or (ii) ownership of fifty percent (50%) or more of the
#outstanding shares, or (iii) beneficial ownership of such entity.
#"You" (or "Your") shall mean an individual or Legal Entity
#exercising permissions granted by this License.
#"Source" form shall mean the preferred form for making modifications,
#including but not limited to software source code, documentation
#source, and configuration files.
#"Object" form shall mean any form resulting from mechanical
#transformation or translation of a Source form, including but
#not limited to compiled object code, generated documentation,
#and conversions to other media types.
#"Work" shall mean the work of authorship, whether in Source or
#Object form, made available under the License, as indicated by a
#copyright notice that is included in or attached to the work
#(an example is provided in the Appendix below).
#"Derivative Works" shall mean any work, whether in Source or Object
#form, that is based on (or derived from) the Work and for which the
#editorial revisions, annotations, elaborations, or other modifications
#represent, as a whole, an original work of authorship. For the purposes
#of this License, Derivative Works shall not include works that remain
#separable from, or merely link (or bind by name) to the interfaces of,
#the Work and Derivative Works thereof.
#"Contribution" shall mean any work of authorship, including
#the original version of the Work and any modifications or additions
#to that Work or Derivative Works thereof, that is intentionally
#submitted to Licensor for inclusion in the Work by the copyright owner
#or by an individual or Legal Entity authorized to submit on behalf of
#the copyright owner. For the purposes of this definition, "submitted"
#means any form of electronic, verbal, or written communication sent
#to the Licensor or its representatives, including but not limited to
#communication on electronic mailing lists, source code control systems,
#and issue tracking systems that are managed by, or on behalf of, the
#Licensor for the purpose of discussing and improving the Work, but
#excluding communication that is conspicuously marked or otherwise
#designated in writing by the copyright owner as "Not a Contribution."
#"Contributor" shall mean Licensor and any individual or Legal Entity
#on behalf of whom a Contribution has been received by Licensor and
#subsequently incorporated within the Work.
#2. Grant of Copyright License. Subject to the terms and conditions of
#this License, each Contributor hereby grants to You a perpetual,
#worldwide, non-exclusive, no-charge, royalty-free, irrevocable
#copyright license to reproduce, prepare Derivative Works of,
#publicly display, publicly perform, sublicense, and distribute the
#Work and such Derivative Works in Source or Object form.
#3. Grant of Patent License. Subject to the terms and conditions of
#this License, each Contributor hereby grants to You a perpetual,
#worldwide, non-exclusive, no-charge, royalty-free, irrevocable
#(except as stated in this section) patent license to make, have made,
#use, offer to sell, sell, import, and otherwise transfer the Work,
#where such license applies only to those patent claims licensable
#by such Contributor that are necessarily infringed by their
#Contribution(s) alone or by combination of their Contribution(s)
#with the Work to which such Contribution(s) was submitted. If You
#institute patent litigation against any entity (including a
#cross-claim or counterclaim in a lawsuit) alleging that the Work
#or a Contribution incorporated within the Work constitutes direct
#or contributory patent infringement, then any patent licenses
#granted to You under this License for that Work shall terminate
#as of the date such litigation is filed.
#4. Redistribution. You may reproduce and distribute copies of the
#Work or Derivative Works thereof in any medium, with or without
#modifications, and in Source or Object form, provided that You
#meet the following conditions:
#(a) You must give any other recipients of the Work or
#Derivative Works a copy of this License; and
#(b) You must cause any modified files to carry prominent notices
#stating that You changed the files; and
#(c) You must retain, in the Source form of any Derivative Works
#that You distribute, all copyright, patent, trademark, and
#attribution notices from the Source form of the Work,
#excluding those notices that do not pertain to any part of
#the Derivative Works; and
#(d) If the Work includes a "NOTICE" text file as part of its
#distribution, then any Derivative Works that You distribute must
#include a readable copy of the attribution notices contained
#within such NOTICE file, excluding those notices that do not
#pertain to any part of the Derivative Works, in at least one
#of the following places: within a NOTICE text file distributed
#as part of the Derivative Works; within the Source form or
#documentation, if provided along with the Derivative Works; or,
#within a display generated by the Derivative Works, if and
#wherever such third-party notices normally appear. The contents
#of the NOTICE file are for informational purposes only and
#do not modify the License. You may add Your own attribution
#notices within Derivative Works that You distribute, alongside
#or as an addendum to the NOTICE text from the Work, provided
#that such additional attribution notices cannot be construed
#as modifying the License.
#You may add Your own copyright statement to Your modifications and
#may provide additional or different license terms and conditions
#for use, reproduction, or distribution of Your modifications, or
#for any such Derivative Works as a whole, provided Your use,
#reproduction, and distribution of the Work otherwise complies with
#the conditions stated in this License.
#5. Submission of Contributions. Unless You explicitly state otherwise,
#any Contribution intentionally submitted for inclusion in the Work
#by You to the Licensor shall be under the terms and conditions of
#this License, without any additional terms or conditions.
#Notwithstanding the above, nothing herein shall supersede or modify
#the terms of any separate license agreement you may have executed
#with Licensor regarding such Contributions.
#6. Trademarks. This License does not grant permission to use the trade
#names, trademarks, service marks, or product names of the Licensor,
#except as required for reasonable and customary use in describing the
#origin of the Work and reproducing the content of the NOTICE file.
#7. Disclaimer of Warranty. Unless required by applicable law or
#agreed to in writing, Licensor provides the Work (and each
#Contributor provides its Contributions) on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
#implied, including, without limitation, any warranties or conditions
#of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
#PARTICULAR PURPOSE. You are solely responsible for determining the
#appropriateness of using or redistributing the Work and assume any
#risks associated with Your exercise of permissions under this License.
#8. Limitation of Liability. In no event and under no legal theory,
#whether in tort (including negligence), contract, or otherwise,
#unless required by applicable law (such as deliberate and grossly
#negligent acts) or agreed to in writing, shall any Contributor be
#liable to You for damages, including any direct, indirect, special,
#incidental, or consequential damages of any character arising as a
#result of this License or out of the use or inability to use the
#Work (including but not limited to damages for loss of goodwill,
#work stoppage, computer failure or malfunction, or any and all
#other commercial damages or losses), even if such Contributor
#has been advised of the possibility of such damages.
#9. Accepting Warranty or Additional Liability. While redistributing
#the Work or Derivative Works thereof, You may choose to offer,
#and charge a fee for, acceptance of support, warranty, indemnity,
#or other liability obligations and/or rights consistent with this
#License. However, in accepting such obligations, You may act only
#on Your own behalf and on Your sole responsibility, not on behalf
#of any other Contributor, and only if You agree to indemnify,
#defend, and hold each Contributor harmless for any liability
#incurred by, or claims asserted against, such Contributor by reason
#of your accepting any such warranty or additional liability.
#END OF TERMS AND CONDITIONS
#APPENDIX: How to apply the Apache License to your work.
#To apply the Apache License to your work, attach the following
#boilerplate notice, with the fields enclosed by brackets "[]"
#replaced with your own identifying information. (Don't include
#the brackets!) The text should be enclosed in the appropriate
#comment syntax for the file format. We also recommend that a
#file or class name and description of purpose be included on the
#same "printed page" as the copyright notice for easier
#identification within third-party archives.
#Copyright [yyyy] [name of copyright owner]
#Licensed under the Apache License, Version 2.0 (the "License");
#you may not use this file except in compliance with the License.
#You may obtain a copy of the License at
#http://www.apache.org/licenses/LICENSE-2.0
#Unless required by applicable law or agreed to in writing, software
#distributed under the License is distributed on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#See the License for the specific language governing permissions and
#limitations under the License.
###Output
_____no_output_____
###Markdown
**How to use this notebook?**---Videos describing how to use our notebooks are available on youtube: - [**Video 1**](https://www.youtube.com/watch?v=GzD2gamVNHI&feature=youtu.be): Full run through of the workflow to obtain the notebooks and the provided test datasets as well as a common use of the notebook - [**Video 2**](https://www.youtube.com/watch?v=PUuQfP5SsqM&feature=youtu.be): Detailed description of the different sections of the notebook---**Structure of a notebook**The notebook contains two types of cell: **Text cells** provide information and can be modified by double-clicking the cell. You are currently reading a text cell. You can create a new text cell by clicking `+ Text`.**Code cells** contain code and the code can be modified by selecting the cell. To execute the cell, move your cursor to the `[ ]`-mark on the left side of the cell (a play button appears). Click to execute the cell. After execution is done, the animation of the play button stops. You can create a new code cell by clicking `+ Code`.---**Table of contents, Code snippets** and **Files**On the top left side of the notebook you find three tabs which contain, from top to bottom:*Table of contents* = contains the structure of the notebook. Click the content to move quickly between sections.*Code snippets* = contains examples of how to code certain tasks. You can ignore this when using this notebook.*Files* = contains all available files. After mounting your Google Drive (see section 1.) you will find your files and folders here. **Remember that all uploaded files are purged after changing the runtime.** All files saved in Google Drive will remain. You do not need to use the Mount Drive button; your Google Drive is connected in section 1.2.**Note:** The "sample data" in "Files" contains default files. Do not upload anything in here!---**Making changes to the notebook****You can make a copy** of the notebook and save it to your Google Drive. To do this, click File -> Save a copy in Drive.To **edit a cell**, double click on the text. This will show you either the source code (in code cells) or the source text (in text cells).You can use the `#`-mark in code cells to comment out parts of the code. This allows you to keep the original code piece in the cell as a comment. **0. Before getting started**--- Preparing the dataset carefully is essential to make this Detectron2 notebook work. This model requires as input a set of images and as target a list of annotation files in Pascal VOC format. The annotation files should have the exact same name as the input files, except with an .xml instead of the .png extension. The annotation files contain the class labels and all bounding boxes for the objects in each image of your dataset. Most datasets offer the option of saving annotations in this format, and most hand-annotation tools will save them in this format automatically. If you want to assemble your own dataset, we recommend using the open source https://www.makesense.ai/ resource. You can follow our instructions on how to label your dataset with this tool on our [wiki](https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki/Object-Detection-(YOLOv2)).**We strongly recommend that you generate extra paired images. These images can be used to assess the quality of your trained model (Quality control dataset)**. The quality control assessment can be done directly in this notebook. **Additionally, the corresponding input and output files need to have the same name**. 
Please note that you currently can **only use .png files!**Here's a common data structure that can work:* Experiment A - **Training dataset** - Input images (Training_source) - img_1.png, img_2.png, ... - Annotation files (Training_target) - img_1.xml, img_2.xml, ... - **Quality control dataset** - Input images - img_1.png, img_2.png - Annotation files - img_1.xml, img_2.xml - **Data to be predicted** - **Results**---**Important note**- If you wish to **Train a network from scratch** using your own dataset (and we encourage everyone to do that), you will need to run **sections 1 - 4**, then use **section 5** to assess the quality of your model and **section 6** to run predictions using the model that you trained.- If you wish to **Evaluate your model** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 5** to assess the quality of your model.- If you only wish to **run predictions** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 6** to run the predictions on the desired model.--- **1. Install Detectron2 and dependencies**--- **1.1. Install key dependencies**---
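Before running the notebook, it can be worth checking that every image has a matching annotation file. The snippet below is only a sketch; the two folder paths are placeholders that you would replace with your own Training_source and Training_target folders.

```python
# Minimal sketch: verify that every .png image has a matching .xml annotation.
# The two paths below are placeholders, not paths used elsewhere in this notebook.
import os

image_folder = "/content/gdrive/MyDrive/Training_source"       # hypothetical path
annotation_folder = "/content/gdrive/MyDrive/Training_target"  # hypothetical path

images = {os.path.splitext(f)[0] for f in os.listdir(image_folder) if f.lower().endswith(".png")}
annotations = {os.path.splitext(f)[0] for f in os.listdir(annotation_folder) if f.lower().endswith(".xml")}

print("Images without an annotation:", sorted(images - annotations))
print("Annotations without an image:", sorted(annotations - images))
```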
###Code
#@markdown ##Install dependencies and Detectron2
from builtins import any as b_any
def get_requirements_path():
# Store requirements file in 'contents' directory
current_dir = os.getcwd()
dir_count = current_dir.count('/') - 1
path = '../' * (dir_count) + 'requirements.txt'
return path
def filter_files(file_list, filter_list):
filtered_list = []
for fname in file_list:
if b_any(fname.split('==')[0] in s for s in filter_list):
filtered_list.append(fname)
return filtered_list
def build_requirements_file(before, after):
path = get_requirements_path()
# Exporting requirements.txt for local run
!pip freeze > $path
# Get minimum requirements file
df = pd.read_csv(path, delimiter = "\n")
mod_list = [m.split('.')[0] for m in after if not m in before]
req_list_temp = df.values.tolist()
req_list = [x[0] for x in req_list_temp]
# Replace with package name and handle cases where import name is different to module name
mod_name_list = [['sklearn', 'scikit-learn'], ['skimage', 'scikit-image']]
mod_replace_list = [[x[1] for x in mod_name_list] if s in [x[0] for x in mod_name_list] else s for s in mod_list]
filtered_list = filter_files(req_list, mod_replace_list)
file=open(path,'w')
for item in filtered_list:
file.writelines(item + '\n')
file.close()
import sys
before = [str(m) for m in sys.modules]
# install dependencies
#!pip install -U torch torchvision cython
!pip install -U 'git+https://github.com/facebookresearch/fvcore.git' 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
import torch, torchvision
import os
import pandas as pd
torch.__version__
!git clone https://github.com/facebookresearch/detectron2 detectron2_repo
!pip install -e detectron2_repo
!pip install wget
#Force session restart
exit(0)
# Build requirements file for local run
after = [str(m) for m in sys.modules]
build_requirements_file(before, after)
###Output
_____no_output_____
###Markdown
**1.2. Restart your runtime**---** Ignore the following error message. Your runtime has automatically restarted. This is normal.** **1.3. Load key dependencies**---
###Code
Notebook_version = '1.13'
Network = 'Detectron 2D'
#@markdown ##Play this cell to load the required dependencies
import wget
# Some basic setup:
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()
# import some common libraries
import numpy as np
import os, json, cv2, random
from google.colab.patches import cv2_imshow
import yaml
#Download the script to convert XML into COCO
wget.download("https://github.com/HenriquesLab/ZeroCostDL4Mic/raw/master/Tools/voc2coco.py", "/content")
# import some common detectron2 utilities
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog, DatasetCatalog
from detectron2.utils.visualizer import ColorMode
from detectron2.data import DatasetCatalog, MetadataCatalog, build_detection_test_loader
from datetime import datetime
from detectron2.data.catalog import Metadata
from detectron2.config import get_cfg
from detectron2.data import DatasetCatalog, MetadataCatalog, build_detection_test_loader
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.engine import DefaultTrainer
from detectron2.data.datasets import register_coco_instances
from detectron2.utils.visualizer import ColorMode
import glob
from detectron2.checkpoint import Checkpointer
from detectron2.config import get_cfg
import os
# ------- Common variable to all ZeroCostDL4Mic notebooks -------
import numpy as np
from matplotlib import pyplot as plt
import urllib
import os, random
import shutil
import zipfile
from tifffile import imread, imsave
import time
import sys
from pathlib import Path
import pandas as pd
import csv
from glob import glob
from scipy import signal
from scipy import ndimage
from skimage import io
from sklearn.linear_model import LinearRegression
from skimage.util import img_as_uint
import matplotlib as mpl
from skimage.metrics import structural_similarity
from skimage.metrics import peak_signal_noise_ratio as psnr
from astropy.visualization import simple_norm
from skimage import img_as_float32
# Colors for the warning messages
class bcolors:
WARNING = '\033[31m'
W = '\033[0m' # white (normal)
R = '\033[31m' # red
#Disable some of the tensorflow warnings
import warnings
warnings.filterwarnings("ignore")
from detectron2.engine import DefaultTrainer
from detectron2.evaluation import COCOEvaluator
class CocoTrainer(DefaultTrainer):
@classmethod
def build_evaluator(cls, cfg, dataset_name, output_folder=None):
if output_folder is None:
os.makedirs("coco_eval", exist_ok=True)
output_folder = "coco_eval"
return COCOEvaluator(dataset_name, cfg, False, output_folder)
print("Librairies loaded")
# Check if this is the latest version of the notebook
All_notebook_versions = pd.read_csv("https://raw.githubusercontent.com/HenriquesLab/ZeroCostDL4Mic/master/Colab_notebooks/Latest_Notebook_versions.csv", dtype=str)
print('Notebook version: '+Notebook_version)
Latest_Notebook_version = All_notebook_versions[All_notebook_versions["Notebook"] == Network]['Version'].iloc[0]
print('Latest notebook version: '+Latest_Notebook_version)
if Notebook_version == Latest_Notebook_version:
print("This notebook is up-to-date.")
else:
print(bcolors.WARNING +"A new version of this notebook has been released. We recommend that you download it at https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki")
#Failsafes
cell_ran_prediction = 0
cell_ran_training = 0
cell_ran_QC_training_dataset = 0
cell_ran_QC_QC_dataset = 0
###Output
_____no_output_____
###Markdown
**2. Initialise the Colab session**--- **2.1. Check for GPU access**---By default, the session should be using Python 3 and GPU acceleration, but it is possible to ensure that these are set properly by doing the following: Go to **Runtime -> Change the Runtime type****Runtime type: Python 3** *(Python 3 is the programming language in which this program is written)***Accelerator: GPU** *(Graphics processing unit)*
###Code
#@markdown ##Run this cell to check if you have GPU access
#%tensorflow_version 1.x
import tensorflow as tf
if tf.test.gpu_device_name()=='':
print('You do not have GPU access.')
print('Did you change your runtime ?')
print('If the runtime setting is correct then Google did not allocate a GPU for your session')
print('Expect slow performance. To access GPU try reconnecting later')
else:
print('You have GPU access')
!nvidia-smi
###Output
_____no_output_____
###Markdown
**2.2. Mount your Google Drive**--- To use this notebook on the data present in your Google Drive, you need to mount your Google Drive to this notebook. Play the cell below to mount your Google Drive and follow the link. In the new browser window, select your drive and select 'Allow', copy the code, paste it into the cell and press Enter. This will give Colab access to the data on the drive. Once this is done, your data are available in the **Files** tab on the top left of the notebook.
###Code
#@markdown ##Play the cell to connect your Google Drive to Colab
#@markdown * Click on the URL.
#@markdown * Sign in your Google Account.
#@markdown * Copy the authorization code.
#@markdown * Enter the authorization code.
#@markdown * Click on "Files" site on the right. Refresh the site. Your Google Drive folder should now be available here as "drive".
# mount user's Google Drive to Google Colab.
from google.colab import drive
drive.mount('/content/gdrive')
###Output
_____no_output_____
###Markdown
** If you cannot see your files, reactivate your session by connecting to your hosted runtime.** Connect to a hosted runtime. **3. Select your parameters and paths** **3.1. Setting main training parameters**--- **Paths for training, predictions and results****`Training_source:`, `Training_target`:** These are the paths to your folders containing the Training_source and the annotation data respectively. To find the paths of the folders containing the respective datasets, go to your Files on the left of the notebook, navigate to the folder containing your files and copy the path by right-clicking on the folder, **Copy path** and pasting it into the right box below.**`labels`:** Input the name of the differentes labels used to annotate your dataset (separated by a comma).**`model_name`:** Use only my_model -style, not my-model (Use "_" not "-"). Do not use spaces in the name. Avoid using the name of an existing model (saved in the same folder) as it will be overwritten.**`model_path`**: Enter the path where your model will be saved once trained (for instance your result folder).**Training Parameters****`number_of_iteration`:** Input how many iterations to use to train the network. Initial results can be observed using 1000 iterations but consider using 5000 or more iterations to train your models. **Default value: 2000** **Advanced Parameters - experienced users only****`batch_size:`** This parameter defines the number of patches seen in each training step. Noise2Void requires a large batch size for stable training. Reduce this parameter if your GPU runs out of memory. **Default value: 128****`number_of_steps`:** Define the number of training steps by epoch. By default this parameter is calculated so that each image / patch is seen at least once per epoch. **Default value: Number of patch / batch_size****`percentage_validation`:** Input the percentage of your training dataset you want to use to validate the network during the training. **Default value: 10****`initial_learning_rate`:** Input the initial value to be used as learning rate. **Default value: 0.0001**
###Code
# create DataGenerator-object.
#@markdown ###Path to training image(s):
Training_source = "" #@param {type:"string"}
Training_target = "" #@param {type:"string"}
#@markdown ###Labels
#@markdown Input the name of the different labels present in your training dataset separated by a comma
labels = "" #@param {type:"string"}
#@markdown ### Model name and path:
model_name = "" #@param {type:"string"}
model_path = "" #@param {type:"string"}
full_model_path = model_path+'/'+model_name+'/'
#@markdown ###Training Parameters
#@markdown Number of iterations:
number_of_iteration = 2000#@param {type:"number"}
#Here we store the informations related to our labels
list_of_labels = labels.split(", ")
with open('/content/labels.txt', 'w') as f:
for item in list_of_labels:
print(item, file=f)
number_of_labels = len(list_of_labels)
#@markdown ###Advanced Parameters
Use_Default_Advanced_Parameters = True#@param {type:"boolean"}
#@markdown ###If not, please input:
batch_size = 4#@param {type:"number"}
percentage_validation = 10#@param {type:"number"}
initial_learning_rate = 0.001 #@param {type:"number"}
if (Use_Default_Advanced_Parameters):
print("Default advanced parameters enabled")
batch_size = 4
percentage_validation = 10
initial_learning_rate = 0.001
# Here we disable pre-trained model by default (in case the next cell is not ran)
Use_pretrained_model = True
# Here we disable data augmentation by default (in case the cell is not ran)
Use_Data_augmentation = True
# Here we split the data between training and validation
# Here we count the number of files in the training target folder
Filelist = os.listdir(Training_target)
number_files = len(Filelist)
File_for_validation = int((number_files)/percentage_validation)+1
#Here we split the training dataset between training and validation
# Everything is copied in the /Content Folder
Training_source_temp = "/content/training_source"
if os.path.exists(Training_source_temp):
shutil.rmtree(Training_source_temp)
os.makedirs(Training_source_temp)
Training_target_temp = "/content/training_target"
if os.path.exists(Training_target_temp):
shutil.rmtree(Training_target_temp)
os.makedirs(Training_target_temp)
Validation_source_temp = "/content/validation_source"
if os.path.exists(Validation_source_temp):
shutil.rmtree(Validation_source_temp)
os.makedirs(Validation_source_temp)
Validation_target_temp = "/content/validation_target"
if os.path.exists(Validation_target_temp):
shutil.rmtree(Validation_target_temp)
os.makedirs(Validation_target_temp)
list_source = os.listdir(os.path.join(Training_source))
list_target = os.listdir(os.path.join(Training_target))
#Move files into the temporary source and target directories:
for f in os.listdir(os.path.join(Training_source)):
shutil.copy(Training_source+"/"+f, Training_source_temp+"/"+f)
for p in os.listdir(os.path.join(Training_target)):
shutil.copy(Training_target+"/"+p, Training_target_temp+"/"+p)
list_source_temp = os.listdir(os.path.join(Training_source_temp))
list_target_temp = os.listdir(os.path.join(Training_target_temp))
#Here we move images to be used for validation
for i in range(File_for_validation):
name = list_source_temp[i]
shutil.move(Training_source_temp+"/"+name, Validation_source_temp+"/"+name)
shortname_no_extension = name[:-4]
shutil.move(Training_target_temp+"/"+shortname_no_extension+".xml", Validation_target_temp+"/"+shortname_no_extension+".xml")
# Here we convert the XML files into COCO format to be loaded in detectron2
#First we need to create list of labels to generate the json dictionaries
list_source_training_temp = os.listdir(os.path.join(Training_source_temp))
list_source_validation_temp = os.listdir(os.path.join(Validation_source_temp))
name_no_extension_training = []
for n in list_source_training_temp:
name_no_extension_training.append(os.path.splitext(n)[0])
name_no_extension_validation = []
for n in list_source_validation_temp:
name_no_extension_validation.append(os.path.splitext(n)[0])
#Save the list of labels as text file
with open('/content/training_files.txt', 'w') as f:
for item in name_no_extension_training:
print(item, end='\n', file=f)
with open('/content/validation_files.txt', 'w') as f:
for item in name_no_extension_validation:
print(item, end='\n', file=f)
file_output_training = Training_target_temp+"/output.json"
file_output_validation = Validation_target_temp+"/output.json"
os.chdir("/content")
!python voc2coco.py --ann_dir "$Training_target_temp" --output "$file_output_training" --ann_ids "/content/training_files.txt" --labels "/content/labels.txt" --ext xml
!python voc2coco.py --ann_dir "$Validation_target_temp" --output "$file_output_validation" --ann_ids "/content/validation_files.txt" --labels "/content/labels.txt" --ext xml
os.chdir("/")
#Here we load the dataset to detectron2
if cell_ran_training == 0:
from detectron2.data.datasets import register_coco_instances
register_coco_instances("my_dataset_train", {}, Training_target_temp+"/output.json", Training_source_temp)
register_coco_instances("my_dataset_val", {}, Validation_target_temp+"/output.json", Validation_source_temp)
#visualize training data
my_dataset_train_metadata = MetadataCatalog.get("my_dataset_train")
dataset_dicts = DatasetCatalog.get("my_dataset_train")
import random
from detectron2.utils.visualizer import Visualizer
for d in random.sample(dataset_dicts, 1):
img = cv2.imread(d["file_name"])
visualizer = Visualizer(img[:, :, ::-1], metadata=my_dataset_train_metadata, instance_mode=ColorMode.SEGMENTATION, scale=0.8)
vis = visualizer.draw_dataset_dict(d)
cv2_imshow(vis.get_image()[:, :, ::-1])
# failsafe
cell_ran_training = 1
###Output
_____no_output_____
###Markdown
**3.2. Data augmentation** --- Data augmentation is currently enabled by default in this notebook. The option to disable data augmentation is not yet available. **3.3. Using weights from a pre-trained model as initial weights**--- Here, you can set the path to a pre-trained model from which the weights can be extracted and used as a starting point for this training session. **This pre-trained model needs to be a Detectron2 model**.
###Code
# @markdown ##Loading weights from a pre-trained network
Use_pretrained_model = True #@param {type:"boolean"}
pretrained_model_choice = "Faster R-CNN" #@param ["Faster R-CNN","RetinaNet", "Model_from_file"]
#pretrained_model_choice = "Faster R-CNN" #@param ["Faster R-CNN", "RetinaNet", "RPN & Fast R-CNN", "Model_from_file"]
#@markdown ###If you chose "Model_from_file", please provide the path to the model folder:
pretrained_model_path = "" #@param {type:"string"}
# --------------------- Check if we load a previously trained model ------------------------
if Use_pretrained_model:
# --------------------- Load the model from the choosen path ------------------------
if pretrained_model_choice == "Model_from_file":
h5_file_path = pretrained_model_path
print('Weights found in:')
print(h5_file_path)
print('will be loaded prior to training.')
if not os.path.exists(h5_file_path) and Use_pretrained_model:
print('WARNING pretrained model does not exist')
h5_file_path = "COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml"
print('The Faster R-CNN model will be used.')
if pretrained_model_choice == "Faster R-CNN":
h5_file_path = "COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml"
print('The Faster R-CNN model will be used.')
if pretrained_model_choice == "RetinaNet":
h5_file_path = "COCO-Detection/retinanet_R_101_FPN_3x.yaml"
print('The RetinaNet model will be used.')
if pretrained_model_choice == "RPN & Fast R-CNN":
h5_file_path = "COCO-Detection/fast_rcnn_R_50_FPN_1x.yaml"
if not Use_pretrained_model:
h5_file_path = "COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml"
print('The Faster R-CNN model will be used.')
###Output
_____no_output_____
###Markdown
**4. Train the network**--- **4.1. Start Training**---When playing the cell below you should see regular progress updates as the iterations proceed. Network training can take some time.* **CRITICAL NOTE:** Google Colab has a time limit for processing (to prevent using GPU power for datamining). Training time must be less than 12 hours! If training takes longer than 12 hours, please decrease the number of iterations or the size of your dataset. Another way to circumvent this is to save the parameters of the model after training and resume training from this point.
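If a run is interrupted by the Colab time limit, training can in principle be resumed from the checkpoints that Detectron2 writes into the model folder. The sketch below is an illustration under that assumption (it reuses `full_model_path` and the `CocoTrainer` class defined earlier and is not part of the original notebook):

```python
# Minimal sketch, assuming a previous run already wrote config.yaml and checkpoints
# into full_model_path (the folder created in section 3.1).
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file(full_model_path + "/config.yaml")
cfg.OUTPUT_DIR = full_model_path
cfg.SOLVER.MAX_ITER = 4000   # raise the target beyond the iterations already completed

trainer = CocoTrainer(cfg)            # CocoTrainer is defined in section 1.3
trainer.resume_or_load(resume=True)   # resume=True picks up the last checkpoint in cfg.OUTPUT_DIR
trainer.train()
```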
###Code
#@markdown ##Start training
# Create the model folder
if os.path.exists(full_model_path):
shutil.rmtree(full_model_path)
os.makedirs(full_model_path)
#Copy the label names in the model folder
shutil.copy("/content/labels.txt", full_model_path+"/"+"labels.txt")
#PDF export
#######################################
## MISSING
#######################################
#To be added
start = time.time()
#Load the config files
cfg = get_cfg()
if pretrained_model_choice == "Model_from_file":
cfg.merge_from_file(pretrained_model_path+"/config.yaml")
if not pretrained_model_choice == "Model_from_file":
cfg.merge_from_file(model_zoo.get_config_file(h5_file_path))
cfg.DATASETS.TRAIN = ("my_dataset_train",)
cfg.DATASETS.TEST = ("my_dataset_val",)
cfg.OUTPUT_DIR= (full_model_path)
cfg.DATALOADER.NUM_WORKERS = 4
if pretrained_model_choice == "Model_from_file":
cfg.MODEL.WEIGHTS = pretrained_model_path+"/model_final.pth" # Let training initialize from model zoo
if not pretrained_model_choice == "Model_from_file":
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(h5_file_path) # Let training initialize from model zoo
cfg.SOLVER.IMS_PER_BATCH = int(batch_size)
cfg.SOLVER.BASE_LR = initial_learning_rate
cfg.SOLVER.WARMUP_ITERS = 1000
cfg.SOLVER.MAX_ITER = int(number_of_iteration) #adjust up if val mAP is still rising, adjust down if overfit
cfg.SOLVER.STEPS = (1000, 1500)
cfg.SOLVER.GAMMA = 0.05
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512
if pretrained_model_choice == "Faster R-CNN":
cfg.MODEL.ROI_HEADS.NUM_CLASSES = (number_of_labels)
if pretrained_model_choice == "RetinaNet":
cfg.MODEL.RETINANET.NUM_CLASSES = (number_of_labels)
cfg.TEST.EVAL_PERIOD = 500
trainer = CocoTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
#Save the config file after trainning
config = cfg.dump()  # dump the training configuration as a YAML-formatted string
file1 = open(full_model_path+"/config.yaml", 'w')
file1.writelines(config)
file1.close()
#Note: the label file (labels.txt) was already copied into the model folder before training
dt = time.time() - start
mins, sec = divmod(dt, 60)
hour, mins = divmod(mins, 60)
print("Time elapsed:",hour, "hour(s)",mins,"min(s)",round(sec),"sec(s)")
###Output
_____no_output_____
###Markdown
**4.2. Download your model(s) from Google Drive**---Once training is complete, the trained model is automatically saved on your Google Drive, in the **model_path** folder that was selected in Section 3. It is however wise to download the folder, as all of its contents can be erased the next time a model is trained using the same folder. **5. Evaluate your model**---This section allows the user to perform important quality checks on the validity and generalisability of the trained model. Detectron 2 requires you to reload your training dataset in order to perform the quality control step.**We highly recommend performing quality control on all newly trained models.**
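To download the whole model folder from Colab in one go, a zip archive can be created and downloaded, for example along these lines (a sketch, assuming the `model_name` and `full_model_path` variables from section 3.1):

```python
# Minimal sketch: archive the trained model folder and download it from the Colab session.
import shutil
from google.colab import files

archive = shutil.make_archive("/content/" + model_name, "zip", full_model_path)
files.download(archive)   # triggers a browser download of <model_name>.zip
```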
###Code
# model name and path
#@markdown ###Do you want to assess the model you just trained ?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder as well as the location of your training dataset:
#@markdown ####Path to trained model to be assessed:
QC_model_folder = "" #@param {type:"string"}
#@markdown ####Path to the image(s) used for training:
Training_source = "" #@param {type:"string"}
Training_target = "" #@param {type:"string"}
#Here we define the loaded model name and path
QC_model_name = os.path.basename(QC_model_folder)
QC_model_path = os.path.dirname(QC_model_folder)
if (Use_the_current_trained_model):
QC_model_name = model_name
QC_model_path = model_path
full_QC_model_path = QC_model_path+'/'+QC_model_name+'/'
if os.path.exists(full_QC_model_path):
print("The "+QC_model_name+" network will be evaluated")
else:
print(bcolors.WARNING + '!! WARNING: The chosen model does not exist !!')
print('Please make sure you provide a valid model path and model name before proceeding further.')
# Here we load the list of classes stored in the model folder
list_of_labels_QC =[]
with open(full_QC_model_path+'labels.txt', newline='') as csvfile:
reader = csv.reader(csvfile)
for row in csv.reader(csvfile):
list_of_labels_QC.append(row[0])
#Here we create a list of color for later display
color_list = []
for i in range(len(list_of_labels_QC)):
color = list(np.random.choice(range(256), size=3))
color_list.append(color)
#Save the list of labels as text file
if not (Use_the_current_trained_model):
with open('/content/labels.txt', 'w') as f:
for item in list_of_labels_QC:
print(item, file=f)
# Here we split the data between training and validation
# Here we count the number of files in the training target folder
Filelist = os.listdir(Training_target)
number_files = len(Filelist)
percentage_validation= 10
File_for_validation = int((number_files)/percentage_validation)+1
#Here we split the training dataset between training and validation
# Everything is copied in the /Content Folder
Training_source_temp = "/content/training_source"
if os.path.exists(Training_source_temp):
shutil.rmtree(Training_source_temp)
os.makedirs(Training_source_temp)
Training_target_temp = "/content/training_target"
if os.path.exists(Training_target_temp):
shutil.rmtree(Training_target_temp)
os.makedirs(Training_target_temp)
Validation_source_temp = "/content/validation_source"
if os.path.exists(Validation_source_temp):
shutil.rmtree(Validation_source_temp)
os.makedirs(Validation_source_temp)
Validation_target_temp = "/content/validation_target"
if os.path.exists(Validation_target_temp):
shutil.rmtree(Validation_target_temp)
os.makedirs(Validation_target_temp)
list_source = os.listdir(os.path.join(Training_source))
list_target = os.listdir(os.path.join(Training_target))
#Move files into the temporary source and target directories:
for f in os.listdir(os.path.join(Training_source)):
shutil.copy(Training_source+"/"+f, Training_source_temp+"/"+f)
for p in os.listdir(os.path.join(Training_target)):
shutil.copy(Training_target+"/"+p, Training_target_temp+"/"+p)
list_source_temp = os.listdir(os.path.join(Training_source_temp))
list_target_temp = os.listdir(os.path.join(Training_target_temp))
#Here we move images to be used for validation
for i in range(File_for_validation):
name = list_source_temp[i]
shutil.move(Training_source_temp+"/"+name, Validation_source_temp+"/"+name)
shortname_no_extension = name[:-4]
shutil.move(Training_target_temp+"/"+shortname_no_extension+".xml", Validation_target_temp+"/"+shortname_no_extension+".xml")
#First we need to create list of labels to generate the json dictionaries
list_source_training_temp = os.listdir(os.path.join(Training_source_temp))
list_source_validation_temp = os.listdir(os.path.join(Validation_source_temp))
name_no_extension_training = []
for n in list_source_training_temp:
name_no_extension_training.append(os.path.splitext(n)[0])
name_no_extension_validation = []
for n in list_source_validation_temp:
name_no_extension_validation.append(os.path.splitext(n)[0])
#Save the list of labels as text file
with open('/content/training_files.txt', 'w') as f:
for item in name_no_extension_training:
print(item, end='\n', file=f)
with open('/content/validation_files.txt', 'w') as f:
for item in name_no_extension_validation:
print(item, end='\n', file=f)
file_output_training = Training_target_temp+"/output.json"
file_output_validation = Validation_target_temp+"/output.json"
os.chdir("/content")
!python voc2coco.py --ann_dir "$Training_target_temp" --output "$file_output_training" --ann_ids "/content/training_files.txt" --labels "/content/labels.txt" --ext xml
!python voc2coco.py --ann_dir "$Validation_target_temp" --output "$file_output_validation" --ann_ids "/content/validation_files.txt" --labels "/content/labels.txt" --ext xml
os.chdir("/")
#Here we load the dataset to detectron2
if cell_ran_QC_training_dataset == 0:
from detectron2.data.datasets import register_coco_instances
register_coco_instances("my_dataset_train", {}, Training_target_temp+"/output.json", Training_source_temp)
register_coco_instances("my_dataset_val", {}, Validation_target_temp+"/output.json", Validation_source_temp)
#Failsafe for later
cell_ran_QC_training_dataset = 1
###Output
_____no_output_____
###Markdown
**5.1. Inspection of the loss function**---It is good practice to evaluate the training progress by checking whether your model is improving over time. The following cell will allow you to load Tensorboard and investigate how several metrics evolved over time (iterations).
###Code
#@markdown ##Play the cell to load tensorboard
%load_ext tensorboard
%tensorboard --logdir "$full_QC_model_path"
###Output
_____no_output_____
###Markdown
**5.2. Error mapping and quality metrics estimation**---This section will compare the predictions generated by your model against ground-truth. Additionally, the cell below will show the mAP value of the model on the QC data. If you want to read in more detail about this score, we recommend [this brief explanation](https://medium.com/@jonathan_hui/map-mean-average-precision-for-object-detection-45c121a31173). The images provided in the "Source_QC_folder" and "Target_QC_folder" should contain images (e.g. as .png) and annotations (.xml files)!**mAP score:** This refers to the mean average precision of the model on the given dataset. This value gives an indication of how precise the predictions of the classes on this dataset are when compared to the ground-truth. Values closer to 1 indicate a good fit.
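The evaluation call in the cell below (`inference_on_dataset`) also returns the metrics as a dictionary, so the headline numbers can be read programmatically. A sketch, mirroring the variable names used in that cell (note that COCO-style AP values are reported on a 0–100 scale):

```python
# Minimal sketch: capture the metrics returned by the evaluation instead of only printing them.
results = inference_on_dataset(trainer.model, val_loader, evaluator)
bbox_metrics = results["bbox"]              # COCO-style box-detection metrics
print("mAP@[0.5:0.95]:", bbox_metrics["AP"])
print("mAP@0.5       :", bbox_metrics["AP50"])
```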
###Code
#@markdown ##Choose the folders that contain your Quality Control dataset
Source_QC_folder = "" #@param{type:"string"}
Target_QC_folder = "" #@param{type:"string"}
if cell_ran_QC_QC_dataset == 0:
#Save the list of labels as text file
with open('/content/labels_QC.txt', 'w') as f:
for item in list_of_labels_QC:
print(item, file=f)
#Here we create temp folder for the QC
QC_source_temp = "/content/QC_source"
if os.path.exists(QC_source_temp):
shutil.rmtree(QC_source_temp)
os.makedirs(QC_source_temp)
QC_target_temp = "/content/QC_target"
if os.path.exists(QC_target_temp):
shutil.rmtree(QC_target_temp)
os.makedirs(QC_target_temp)
# Create a quality control/Prediction Folder
if os.path.exists(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction"):
shutil.rmtree(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
os.makedirs(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
#Here we move the QC files to the temp
for f in os.listdir(os.path.join(Source_QC_folder)):
shutil.copy(Source_QC_folder+"/"+f, QC_source_temp+"/"+f)
for p in os.listdir(os.path.join(Target_QC_folder)):
shutil.copy(Target_QC_folder+"/"+p, QC_target_temp+"/"+p)
#Here we convert the XML files into JSON
#Save the list of files
list_source_QC_temp = os.listdir(os.path.join(QC_source_temp))
name_no_extension_QC = []
for n in list_source_QC_temp:
name_no_extension_QC.append(os.path.splitext(n)[0])
with open('/content/QC_files.txt', 'w') as f:
for item in name_no_extension_QC:
print(item, end='\n', file=f)
#Convert XML into JSON
file_output_QC = QC_target_temp+"/output.json"
os.chdir("/content")
!python voc2coco.py --ann_dir "$QC_target_temp" --output "$file_output_QC" --ann_ids "/content/QC_files.txt" --labels "/content/labels.txt" --ext xml
os.chdir("/")
#Here we register the QC dataset
register_coco_instances("my_dataset_QC", {}, QC_target_temp+"/output.json", QC_source_temp)
cell_ran_QC_QC_dataset = 1
#Load the model to use
cfg = get_cfg()
cfg.merge_from_file(full_QC_model_path+"config.yaml")
cfg.MODEL.WEIGHTS = os.path.join(full_QC_model_path, "model_final.pth")
cfg.DATASETS.TEST = ("my_dataset_QC", )
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
#Metadata
test_metadata = MetadataCatalog.get("my_dataset_QC")
test_metadata.set(thing_color = color_list)
# For the evaluation we need to load the trainer
trainer = CocoTrainer(cfg)
trainer.resume_or_load(resume=True)
# Here we need to load the predictor
predictor = DefaultPredictor(cfg)
evaluator = COCOEvaluator("my_dataset_QC", cfg, False, output_dir=QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
val_loader = build_detection_test_loader(cfg, "my_dataset_QC")
inference_on_dataset(trainer.model, val_loader, evaluator)
print("A prediction is displayed")
dataset_QC_dicts = DatasetCatalog.get("my_dataset_QC")
for d in random.sample(dataset_QC_dicts, 1):
print("Ground Truth")
img = cv2.imread(d["file_name"])
visualizer = Visualizer(img[:, :, ::-1], metadata=test_metadata, instance_mode=ColorMode.SEGMENTATION, scale=0.5)
vis = visualizer.draw_dataset_dict(d)
cv2_imshow(vis.get_image()[:, :, ::-1])
print("A prediction is displayed")
im = cv2.imread(d["file_name"])
outputs = predictor(im)
v = Visualizer(im[:, :, ::-1],
metadata=test_metadata,
instance_mode=ColorMode.SEGMENTATION,
scale=0.5
)
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2_imshow(out.get_image()[:, :, ::-1])
cell_ran_QC_QC_dataset = 1
###Output
_____no_output_____
###Markdown
**6. Using the trained model**---In this section the unseen data is processed using the trained model (from section 4). First, your unseen images are uploaded and prepared for prediction. After that, your trained model from section 4 is activated and used to generate predictions, which are saved into your Google Drive. **6.1. Generate prediction(s) from unseen dataset**---The current trained model (from section 4.2) can now be used to process images. If an older model needs to be used, please untick the **Use_the_current_trained_model** box and enter the name and path of the model to use. For each input image, the predictions are saved in your **Result_folder** as a CSV file listing the bounding boxes, classes and scores, together with an annotated image.**`Data_folder`:** This folder should contain the images that you want to predict using the network that you trained.**`Result_folder`:** This folder will contain the predicted outputs.
###Code
#@markdown ### Provide the path to your dataset and to the folder where the prediction will be saved, then play the cell to predict output on your unseen images.
#@markdown ###Path to data to analyse and where predicted output should be saved:
Data_folder = "" #@param {type:"string"}
Result_folder = "" #@param {type:"string"}
# model name and path
#@markdown ###Do you want to use the current trained model?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder as well as the location of your training dataset:
#@markdown ####Path to trained model to be assessed:
Prediction_model_folder = "" #@param {type:"string"}
#Here we find the loaded model name and parent path
Prediction_model_name = os.path.basename(Prediction_model_folder)
Prediction_model_path = os.path.dirname(Prediction_model_folder)
if (Use_the_current_trained_model):
print("Using current trained network")
Prediction_model_name = model_name
Prediction_model_path = model_path
full_Prediction_model_path = Prediction_model_path+'/'+Prediction_model_name+'/'
if os.path.exists(full_Prediction_model_path):
print("The "+Prediction_model_name+" network will be used.")
else:
print(bcolors.WARNING +'!! WARNING: The chosen model does not exist !!')
print('Please make sure you provide a valid model path and model name before proceeding further.')
#Here we will load the label file
list_of_labels_predictions =[]
with open(full_Prediction_model_path+'labels.txt', newline='') as csvfile:
reader = csv.reader(csvfile)
for row in csv.reader(csvfile):
list_of_labels_predictions.append(row[0])
#Here we create a list of color
color_list = []
for i in range(len(list_of_labels_predictions)):
color = list(np.random.choice(range(256), size=3))
color_list.append(color)
#Activate the pretrained model.
# Create config
cfg = get_cfg()
cfg.merge_from_file(full_Prediction_model_path+"config.yaml")
cfg.MODEL.WEIGHTS = os.path.join(full_Prediction_model_path, "model_final.pth")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # set threshold for this model
# Create predictor
predictor = DefaultPredictor(cfg)
#Load the metadata from the prediction file
prediction_metadata = Metadata()
prediction_metadata.set(thing_classes = list_of_labels_predictions)
prediction_metadata.set(thing_color = color_list)
start = datetime.now()
validation_folder = Path(Data_folder)
for i, file in enumerate(validation_folder.glob("*.png")):
  # this loop opens the .png files from the data folder, runs the predictor on each image,
  # saves the detections to a CSV file and the visualisation as a PNG in the result folder.
file = str(file)
file_name = file.split("/")[-1]
im = cv2.imread(file)
#Prediction are done here
outputs = predictor(im)
#here we extract the results into numpy arrays
Classes_predictions = outputs["instances"].pred_classes.cpu().data.numpy()
boxes_predictions = outputs["instances"].pred_boxes.tensor.cpu().numpy()
Score_predictions = outputs["instances"].scores.cpu().data.numpy()
#here we save the results into a csv file
prediction_csv = Result_folder+"/"+file_name+"_predictions.csv"
with open(prediction_csv, 'w') as f:
writer = csv.writer(f)
writer.writerow(['x1','y1','x2','y2','box width','box height', 'class', 'score' ])
for i in range(len(boxes_predictions)):
x1 = boxes_predictions[i][0]
y1 = boxes_predictions[i][1]
x2 = boxes_predictions[i][2]
y2 = boxes_predictions[i][3]
box_width = x2 - x1
box_height = y2 -y1
writer.writerow([str(x1), str(y1), str(x2), str(y2), str(box_width), str(box_height), str(list_of_labels_predictions[Classes_predictions[i]]), Score_predictions[i]])
# The last example is displayed
v = Visualizer(im, metadata=prediction_metadata, instance_mode=ColorMode.SEGMENTATION, scale=1)
v = v.draw_instance_predictions(outputs["instances"].to("cpu"))
plt.figure(figsize=(20,20))
plt.imshow(v.get_image()[:, :, ::-1])
plt.axis('off');
plt.savefig(Result_folder+"/"+file_name)
print("Time needed for inferencing:", datetime.now() - start)
###Output
_____no_output_____ |
Contextual-Policy.ipynb | ###Markdown
Simple Reinforcement Learning in Tensorflow Part 1.5: The Contextual BanditsThis tutorial contains a simple example of how to build a policy-gradient based agent that can solve the contextual bandit problem. For more information, see this [Medium post](https://medium.com/p/bff01d1aad9c).For more Reinforcement Learning algorithms, including DQN and Model-based learning in Tensorflow, see my Github repo, [DeepRL-Agents](https://github.com/awjuliani/DeepRL-Agents).
###Code
import tensorflow as tf
import tensorflow.contrib.slim as slim
import numpy as np
###Output
_____no_output_____
###Markdown
The Contextual BanditsHere we define our contextual bandits. In this example, we are using three four-armed bandits, which means each bandit has four arms that can be pulled. Each bandit has different success probabilities for each arm, and as such requires different actions to obtain the best result. The pullArm method generates a random number from a normal distribution with a mean of 0. The lower the value of the chosen arm, the more likely a positive reward will be returned. We want our agent to learn to always choose the bandit-arm that will most often give a positive reward, depending on the bandit presented.
###Code
class contextual_bandit():
def __init__(self):
self.state = 0
#List out our bandits. Currently arms 4, 2, and 1 (respectively) are the most optimal.
self.bandits = np.array([[0.2,0,-0.0,-5],[0.1,-5,1,0.25],[-5,5,5,5]])
self.num_bandits = self.bandits.shape[0]
self.num_actions = self.bandits.shape[1]
def getBandit(self):
self.state = np.random.randint(0,len(self.bandits)) #Returns a random state for each episode.
return self.state
def pullArm(self,action):
#Get a random number.
bandit = self.bandits[self.state,action]
result = np.random.randn(1)
if result > bandit:
#return a positive reward.
return 1
else:
#return a negative reward.
return -1
###Output
_____no_output_____
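###Markdown
Before building the agent, it can help to interact with the environment by hand. The short cell below is purely illustrative (it is not part of the original tutorial): it samples a few random states and arms from the `contextual_bandit` class defined above and prints the rewards that come back.
###Code
env = contextual_bandit()
for _ in range(5):
    s = env.getBandit()                      # which bandit we are facing this episode
    a = np.random.randint(env.num_actions)   # pull a random arm, just for demonstration
    r = env.pullArm(a)
    print("bandit: %d arm: %d reward: %d" % (s, a, r))
###Output
_____no_output_____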
###Markdown
The Policy-Based AgentThe code below establishes our simple neural agent. It takes the current state as input and returns an action. This allows the agent to take actions which are conditioned on the state of the environment, a critical step toward being able to solve full RL problems. The agent uses a single set of weights, within which each value is an estimate of the expected return from choosing a particular arm given a bandit. We use a policy gradient method to update the agent by moving the value for the selected action toward the received reward.
###Code
class agent():
def __init__(self, lr, s_size,a_size):
#These lines established the feed-forward part of the network. The agent takes a state and produces an action.
self.state_in= tf.placeholder(shape=[1],dtype=tf.int32)
state_in_OH = slim.one_hot_encoding(self.state_in,s_size)
output = slim.fully_connected(state_in_OH,a_size,\
biases_initializer=None,activation_fn=tf.nn.sigmoid,weights_initializer=tf.ones_initializer())
self.output = tf.reshape(output,[-1])
self.chosen_action = tf.argmax(self.output,0)
        #The next six lines establish the training procedure. We feed the reward and chosen action into the network
#to compute the loss, and use it to update the network.
self.reward_holder = tf.placeholder(shape=[1],dtype=tf.float32)
self.action_holder = tf.placeholder(shape=[1],dtype=tf.int32)
self.responsible_weight = tf.slice(self.output,self.action_holder,[1])
self.loss = -(tf.log(self.responsible_weight)*self.reward_holder)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=lr)
self.update = optimizer.minimize(self.loss)
###Output
_____no_output_____
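###Markdown
As a side note, the agent above boils down to an (s_size x a_size) weight matrix passed through a sigmoid: the one-hot encoded state selects one row, and each entry of that row is the agent's current estimate for the corresponding arm. The NumPy sketch below is an illustration only, with made-up weights, and is not part of the TensorFlow graph.
###Code
# Illustrative only: mimic one_hot_encoding -> fully_connected(sigmoid, no bias) with hypothetical weights.
W = np.ones((3, 4))              # 3 states (bandits) x 4 actions (arms), matching the environment above
state = 2
state_one_hot = np.eye(3)[state]
action_values = 1.0 / (1.0 + np.exp(-np.dot(state_one_hot, W)))
print(action_values, np.argmax(action_values))
###Output
_____no_output_____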
###Markdown
Training the Agent We will train our agent by getting a state from the environment, taking an action, and receiving a reward. Using these three things, we can learn how to properly update our network so that, over time, it more often chooses the actions that yield the highest rewards for the states presented.
###Code
tf.reset_default_graph() #Clear the Tensorflow graph.
cBandit = contextual_bandit() #Load the bandits.
myAgent = agent(lr=0.001,s_size=cBandit.num_bandits,a_size=cBandit.num_actions) #Load the agent.
weights = tf.trainable_variables()[0] #The weights we will evaluate to look into the network.
total_episodes = 10000 #Set total number of episodes to train agent on.
total_reward = np.zeros([cBandit.num_bandits,cBandit.num_actions]) #Set scoreboard for bandits to 0.
e = 0.1 #Set the chance of taking a random action.
init = tf.global_variables_initializer()
# Launch the tensorflow graph
with tf.Session() as sess:
sess.run(init)
i = 0
while i < total_episodes:
s = cBandit.getBandit() #Get a state from the environment.
#Choose either a random action or one from our network.
if np.random.rand(1) < e:
action = np.random.randint(cBandit.num_actions)
else:
action = sess.run(myAgent.chosen_action,feed_dict={myAgent.state_in:[s]})
reward = cBandit.pullArm(action) #Get our reward for taking an action given a bandit.
#Update the network.
feed_dict={myAgent.reward_holder:[reward],myAgent.action_holder:[action],myAgent.state_in:[s]}
_,ww = sess.run([myAgent.update,weights], feed_dict=feed_dict)
#Update our running tally of scores.
total_reward[s,action] += reward
if i % 500 == 0:
print "Mean reward for each of the " + str(cBandit.num_bandits) + " bandits: " + str(np.mean(total_reward,axis=1))
i+=1
for a in range(cBandit.num_bandits):
    print("The agent thinks action " + str(np.argmax(ww[a])+1) + " for bandit " + str(a+1) + " is the most promising....")
    if np.argmax(ww[a]) == np.argmin(cBandit.bandits[a]):
        print("...and it was right!")
    else:
        print("...and it was wrong!")
###Output
Mean reward for the 3 bandits: [ 0. -0.25 0. ]
Mean reward for the 3 bandits: [ 9. 42. 33.75]
Mean reward for the 3 bandits: [ 45.5 80. 67.75]
Mean reward for the 3 bandits: [ 86.25 116.75 101.25]
Mean reward for the 3 bandits: [ 122.5 153.25 139.5 ]
Mean reward for the 3 bandits: [ 161.75 186.25 179.25]
Mean reward for the 3 bandits: [ 201. 224.75 216. ]
Mean reward for the 3 bandits: [ 240.25 264. 250. ]
Mean reward for the 3 bandits: [ 280.25 301.75 285.25]
Mean reward for the 3 bandits: [ 317.75 340.25 322.25]
Mean reward for the 3 bandits: [ 356.5 377.5 359.25]
Mean reward for the 3 bandits: [ 396.25 415.25 394.75]
Mean reward for the 3 bandits: [ 434.75 451.5 430.5 ]
Mean reward for the 3 bandits: [ 476.75 490. 461.5 ]
Mean reward for the 3 bandits: [ 513.75 533.75 491.75]
Mean reward for the 3 bandits: [ 548.25 572. 527.5 ]
Mean reward for the 3 bandits: [ 587.5 610.75 562. ]
Mean reward for the 3 bandits: [ 628.75 644.25 600.25]
Mean reward for the 3 bandits: [ 665.75 684.75 634.75]
Mean reward for the 3 bandits: [ 705.75 719.75 668.25]
The agent thinks action 4 for bandit 1 is the most promising....
...and it was right!
The agent thinks action 2 for bandit 2 is the most promising....
...and it was right!
The agent thinks action 1 for bandit 3 is the most promising....
...and it was right!
###Markdown
Simple Reinforcement Learning in Tensorflow Part 1.5: The Contextual BanditsThis tutorial contains a simple example of how to build a policy-gradient based agent that can solve the contextual bandit problem. For more information, see this [Medium post](https://medium.com/p/bff01d1aad9c).For more Reinforcement Learning algorithms, including DQN and Model-based learning in Tensorflow, see my Github repo, [DeepRL-Agents](https://github.com/awjuliani/DeepRL-Agents).
###Code
import tensorflow as tf
import tensorflow.contrib.slim as slim
import numpy as np
###Output
_____no_output_____
###Markdown
The Contextual BanditsHere we define our contextual bandits. In this example, we are using three four-armed bandits, which means each bandit has four arms that can be pulled. Each bandit has different success probabilities for each arm, and as such requires different actions to obtain the best result. The pullArm method generates a random number from a normal distribution with a mean of 0. The lower the value of the chosen arm, the more likely a positive reward will be returned. We want our agent to learn to always choose the bandit-arm that will most often give a positive reward, depending on the bandit presented.
###Code
class contextual_bandit():
def __init__(self):
self.state = 0
#List out our bandits. Currently arms 4, 2, and 1 (respectively) are the most optimal.
self.bandits = np.array([[0.2,0,-0.0,-5],[0.1,-5,1,0.25],[-5,5,5,5]])
self.num_bandits = self.bandits.shape[0]
self.num_actions = self.bandits.shape[1]
def getBandit(self):
self.state = np.random.randint(0,len(self.bandits)) #Returns a random state for each episode.
return self.state
def pullArm(self,action):
#Get a random number.
bandit = self.bandits[self.state,action]
result = np.random.randn(1)
if result > bandit:
#return a positive reward.
return 1
else:
#return a negative reward.
return -1
###Output
_____no_output_____
###Markdown
The Policy-Based AgentThe code below establishes our simple neural agent. It takes the current state as input and returns an action. This allows the agent to take actions which are conditioned on the state of the environment, a critical step toward being able to solve full RL problems. The agent uses a single set of weights, within which each value is an estimate of the expected return from choosing a particular arm given a bandit. We use a policy gradient method to update the agent by moving the value for the selected action toward the received reward.
###Code
class agent():
def __init__(self, lr, s_size,a_size):
#These lines established the feed-forward part of the network. The agent takes a state and produces an action.
self.state_in= tf.placeholder(shape=[1],dtype=tf.int32)
state_in_OH = slim.one_hot_encoding(self.state_in,s_size)
output = slim.fully_connected(state_in_OH,a_size,\
biases_initializer=None,activation_fn=tf.nn.sigmoid,weights_initializer=tf.ones_initializer())
self.output = tf.reshape(output,[-1])
self.chosen_action = tf.argmax(self.output,0)
        #The next six lines establish the training procedure. We feed the reward and chosen action into the network
#to compute the loss, and use it to update the network.
self.reward_holder = tf.placeholder(shape=[1],dtype=tf.float32)
self.action_holder = tf.placeholder(shape=[1],dtype=tf.int32)
self.responsible_weight = tf.slice(self.output,self.action_holder,[1])
self.loss = -(tf.log(self.responsible_weight)*self.reward_holder)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=lr)
self.update = optimizer.minimize(self.loss)
###Output
_____no_output_____
###Markdown
Training the Agent We will train our agent by getting a state from the environment, taking an action, and receiving a reward. Using these three things, we can learn how to properly update our network so that, over time, it more often chooses the actions that yield the highest rewards for the states presented.
###Code
tf.reset_default_graph() #Clear the Tensorflow graph.
cBandit = contextual_bandit() #Load the bandits.
myAgent = agent(lr=0.001,s_size=cBandit.num_bandits,a_size=cBandit.num_actions) #Load the agent.
weights = tf.trainable_variables()[0] #The weights we will evaluate to look into the network.
total_episodes = 10000 #Set total number of episodes to train agent on.
total_reward = np.zeros([cBandit.num_bandits,cBandit.num_actions]) #Set scoreboard for bandits to 0.
e = 0.1 #Set the chance of taking a random action.
init = tf.global_variables_initializer()
# Launch the tensorflow graph
with tf.Session() as sess:
sess.run(init)
i = 0
while i < total_episodes:
s = cBandit.getBandit() #Get a state from the environment.
#Choose either a random action or one from our network.
if np.random.rand(1) < e:
action = np.random.randint(cBandit.num_actions)
else:
action = sess.run(myAgent.chosen_action,feed_dict={myAgent.state_in:[s]})
reward = cBandit.pullArm(action) #Get our reward for taking an action given a bandit.
#Update the network.
feed_dict={myAgent.reward_holder:[reward],myAgent.action_holder:[action],myAgent.state_in:[s]}
_,ww = sess.run([myAgent.update,weights], feed_dict=feed_dict)
#Update our running tally of scores.
total_reward[s,action] += reward
if i % 500 == 0:
print("Mean reward for each of the " + str(cBandit.num_bandits) + " bandits: " + str(np.mean(total_reward,axis=1)))
i+=1
for a in range(cBandit.num_bandits):
print("The agent thinks action " + str(np.argmax(ww[a])+1) + " for bandit " + str(a+1) + " is the most promising....")
if np.argmax(ww[a]) == np.argmin(cBandit.bandits[a]):
print("...and it was right!")
else:
print("...and it was wrong!")
###Output
Mean reward for each of the 3 bandits: [ 0. 0. 0.25]
Mean reward for each of the 3 bandits: [ 26.5 38.25 35.5 ]
Mean reward for each of the 3 bandits: [ 68.25 75.25 70.75]
Mean reward for each of the 3 bandits: [ 104.25 112.25 107.25]
Mean reward for each of the 3 bandits: [ 142.5 147.5 145.75]
Mean reward for each of the 3 bandits: [ 181.5 185.75 178.5 ]
Mean reward for each of the 3 bandits: [ 215.5 223.75 220. ]
Mean reward for each of the 3 bandits: [ 256.5 260.75 249.5 ]
Mean reward for each of the 3 bandits: [ 293.5 300.25 287.5 ]
Mean reward for each of the 3 bandits: [ 330.25 341. 323.5 ]
Mean reward for each of the 3 bandits: [ 368.75 377. 359. ]
Mean reward for each of the 3 bandits: [ 411.5 408.75 395. ]
Mean reward for each of the 3 bandits: [ 447. 447. 429.75]
Mean reward for each of the 3 bandits: [ 484. 482.75 466. ]
Mean reward for each of the 3 bandits: [ 522.5 520. 504.75]
Mean reward for each of the 3 bandits: [ 560.25 557.75 538.25]
Mean reward for each of the 3 bandits: [ 597.75 596.25 574.75]
Mean reward for each of the 3 bandits: [ 636.5 630.5 611.25]
Mean reward for each of the 3 bandits: [ 675.25 670. 644.5 ]
Mean reward for each of the 3 bandits: [ 710.5 706.5 682.75]
The agent thinks action 4 for bandit 1 is the most promising....
...and it was right!
The agent thinks action 2 for bandit 2 is the most promising....
...and it was right!
The agent thinks action 1 for bandit 3 is the most promising....
...and it was right!
###Markdown
Simple Reinforcement Learning in Tensorflow Part 1.5: The Contextual BanditsThis tutorial contains a simple example of how to build a policy-gradient based agent that can solve the contextual bandit problem. For more information, see this [Medium post](https://medium.com/p/bff01d1aad9c).For more Reinforcement Learning algorithms, including DQN and Model-based learning in Tensorflow, see my Github repo, [DeepRL-Agents](https://github.com/awjuliani/DeepRL-Agents).
###Code
import tensorflow as tf
import tensorflow.contrib.slim as slim
import numpy as np
###Output
_____no_output_____
###Markdown
The Contextual BanditsHere we define our contextual bandits. In this example, we are using three four-armed bandits, which means each bandit has four arms that can be pulled. Each bandit has different success probabilities for each arm, and as such requires different actions to obtain the best result. The pullArm method generates a random number from a normal distribution with a mean of 0. The lower the value of the chosen arm, the more likely a positive reward will be returned. We want our agent to learn to always choose the bandit-arm that will most often give a positive reward, depending on the bandit presented.
###Code
class contextual_bandit():
def __init__(self):
self.state = 0
#List out our bandits. Currently arms 4, 2, and 1 (respectively) are the most optimal.
self.bandits = np.array([[0.2,0,-0.0,-5],[0.1,-5,1,0.25],[-5,5,5,5]])
self.num_bandits = self.bandits.shape[0]
self.num_actions = self.bandits.shape[1]
def getBandit(self):
self.state = np.random.randint(0,len(self.bandits)) #Returns a random state for each episode.
return self.state
def pullArm(self,action):
#Get a random number.
bandit = self.bandits[self.state,action]
result = np.random.randn(1)
if result > bandit:
#return a positive reward.
return 1
else:
#return a negative reward.
return -1
###Output
_____no_output_____
###Markdown
The Policy-Based AgentThe code below establishes our simple neural agent. It takes the current state as input and returns an action. This allows the agent to take actions which are conditioned on the state of the environment, a critical step toward being able to solve full RL problems. The agent uses a single set of weights, within which each value is an estimate of the expected return from choosing a particular arm given a bandit. We use a policy gradient method to update the agent by moving the value for the selected action toward the received reward.
###Code
class agent():
def __init__(self, lr, s_size,a_size):
#These lines established the feed-forward part of the network. The agent takes a state and produces an action.
self.state_in= tf.placeholder(shape=[1],dtype=tf.int32)
state_in_OH = slim.one_hot_encoding(self.state_in,s_size)
output = slim.fully_connected(state_in_OH,a_size,\
biases_initializer=None,activation_fn=tf.nn.sigmoid,weights_initializer=tf.ones_initializer())
self.output = tf.reshape(output,[-1])
self.chosen_action = tf.argmax(self.output,0)
        #The next six lines establish the training procedure. We feed the reward and chosen action into the network
#to compute the loss, and use it to update the network.
self.reward_holder = tf.placeholder(shape=[1],dtype=tf.float32)
self.action_holder = tf.placeholder(shape=[1],dtype=tf.int32)
self.responsible_weight = tf.slice(self.output,self.action_holder,[1])
self.loss = -(tf.log(self.responsible_weight)*self.reward_holder)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=lr)
self.update = optimizer.minimize(self.loss)
###Output
_____no_output_____
###Markdown
Training the Agent We will train our agent by getting a state from the environment, taking an action, and receiving a reward. Using these three things, we can learn how to properly update our network so that, over time, it more often chooses the actions that yield the highest rewards for the states presented.
###Code
tf.reset_default_graph() #Clear the Tensorflow graph.
cBandit = contextual_bandit() #Load the bandits.
myAgent = agent(lr=0.001,s_size=cBandit.num_bandits,a_size=cBandit.num_actions) #Load the agent.
weights = tf.trainable_variables()[0] #The weights we will evaluate to look into the network.
total_episodes = 10000 #Set total number of episodes to train agent on.
total_reward = np.zeros([cBandit.num_bandits,cBandit.num_actions]) #Set scoreboard for bandits to 0.
e = 0.1 #Set the chance of taking a random action.
init = tf.global_variables_initializer()
# Launch the tensorflow graph
with tf.Session() as sess:
sess.run(init)
i = 0
while i < total_episodes:
s = cBandit.getBandit() #Get a state from the environment.
#Choose either a random action or one from our network.
if np.random.rand(1) < e:
action = np.random.randint(cBandit.num_actions)
else:
action = sess.run(myAgent.chosen_action,feed_dict={myAgent.state_in:[s]})
reward = cBandit.pullArm(action) #Get our reward for taking an action given a bandit.
#Update the network.
feed_dict={myAgent.reward_holder:[reward],myAgent.action_holder:[action],myAgent.state_in:[s]}
_,ww = sess.run([myAgent.update,weights], feed_dict=feed_dict)
#Update our running tally of scores.
total_reward[s,action] += reward
if i % 500 == 0:
print "Mean reward for each of the " + str(cBandit.num_bandits) + " bandits: " + str(np.mean(total_reward,axis=1))
i+=1
for a in range(cBandit.num_bandits):
    print("The agent thinks action " + str(np.argmax(ww[a])+1) + " for bandit " + str(a+1) + " is the most promising....")
    if np.argmax(ww[a]) == np.argmin(cBandit.bandits[a]):
        print("...and it was right!")
    else:
        print("...and it was wrong!")
###Output
Mean reward for the 3 bandits: [ 0. -0.25 0. ]
Mean reward for the 3 bandits: [ 9. 42. 33.75]
Mean reward for the 3 bandits: [ 45.5 80. 67.75]
Mean reward for the 3 bandits: [ 86.25 116.75 101.25]
Mean reward for the 3 bandits: [ 122.5 153.25 139.5 ]
Mean reward for the 3 bandits: [ 161.75 186.25 179.25]
Mean reward for the 3 bandits: [ 201. 224.75 216. ]
Mean reward for the 3 bandits: [ 240.25 264. 250. ]
Mean reward for the 3 bandits: [ 280.25 301.75 285.25]
Mean reward for the 3 bandits: [ 317.75 340.25 322.25]
Mean reward for the 3 bandits: [ 356.5 377.5 359.25]
Mean reward for the 3 bandits: [ 396.25 415.25 394.75]
Mean reward for the 3 bandits: [ 434.75 451.5 430.5 ]
Mean reward for the 3 bandits: [ 476.75 490. 461.5 ]
Mean reward for the 3 bandits: [ 513.75 533.75 491.75]
Mean reward for the 3 bandits: [ 548.25 572. 527.5 ]
Mean reward for the 3 bandits: [ 587.5 610.75 562. ]
Mean reward for the 3 bandits: [ 628.75 644.25 600.25]
Mean reward for the 3 bandits: [ 665.75 684.75 634.75]
Mean reward for the 3 bandits: [ 705.75 719.75 668.25]
The agent thinks action 4 for bandit 1 is the most promising....
...and it was right!
The agent thinks action 2 for bandit 2 is the most promising....
...and it was right!
The agent thinks action 1 for bandit 3 is the most promising....
...and it was right!
###Markdown
Simple Reinforcement Learning in Tensorflow Part 1.5: The Contextual BanditsThis tutorial contains a simple example of how to build a policy-gradient based agent that can solve the contextual bandit problem. For more information, see this [Medium post](https://medium.com/p/bff01d1aad9c).For more Reinforcement Learning algorithms, including DQN and Model-based learning in Tensorflow, see my Github repo, [DeepRL-Agents](https://github.com/awjuliani/DeepRL-Agents).
###Code
import tensorflow as tf
import tensorflow.contrib.slim as slim
import numpy as np
###Output
_____no_output_____
###Markdown
The Contextual BanditsHere we define our contextual bandits. In this example, we are using three four-armed bandits, which means each bandit has four arms that can be pulled. Each bandit has different success probabilities for each arm, and as such requires different actions to obtain the best result. The pullArm method generates a random number from a normal distribution with a mean of 0. The lower the value of the chosen arm, the more likely a positive reward will be returned. We want our agent to learn to always choose the bandit-arm that will most often give a positive reward, depending on the bandit presented.
###Code
class contextual_bandit():
def __init__(self):
self.state = 0
#List out our bandits. Currently arms 4, 2, and 1 (respectively) are the most optimal.
self.bandits = np.array([[0.2,0,-0.0,-5],[0.1,-5,1,0.25],[-5,5,5,5]])
self.num_bandits = self.bandits.shape[0]
self.num_actions = self.bandits.shape[1]
def getBandit(self):
self.state = np.random.randint(0,len(self.bandits)) #Returns a random state for each episode.
return self.state
def pullArm(self,action):
#Get a random number.
bandit = self.bandits[self.state,action]
result = np.random.randn(1)
if result > bandit:
#return a positive reward.
return 1
else:
#return a negative reward.
return -1
###Output
_____no_output_____
###Markdown
The Policy-Based AgentThe code below establishes our simple neural agent. It takes the current state as input and returns an action. This allows the agent to take actions which are conditioned on the state of the environment, a critical step toward being able to solve full RL problems. The agent uses a single set of weights, within which each value is an estimate of the expected return from choosing a particular arm given a bandit. We use a policy gradient method to update the agent by moving the value for the selected action toward the received reward.
###Code
class agent():
def __init__(self, lr, s_size,a_size):
#These lines established the feed-forward part of the network. The agent takes a state and produces an action.
self.state_in= tf.placeholder(shape=[1],dtype=tf.int32)
state_in_OH = slim.one_hot_encoding(self.state_in,s_size)
output = slim.fully_connected(state_in_OH,a_size,\
biases_initializer=None,activation_fn=tf.nn.sigmoid,weights_initializer=tf.ones_initializer())
self.output = tf.reshape(output,[-1])
self.chosen_action = tf.argmax(self.output,0)
        #The next six lines establish the training procedure. We feed the reward and chosen action into the network
#to compute the loss, and use it to update the network.
self.reward_holder = tf.placeholder(shape=[1],dtype=tf.float32)
self.action_holder = tf.placeholder(shape=[1],dtype=tf.int32)
self.responsible_weight = tf.slice(self.output,self.action_holder,[1])
self.loss = -(tf.log(self.responsible_weight)*self.reward_holder)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=lr)
self.update = optimizer.minimize(self.loss)
###Output
_____no_output_____
###Markdown
Training the Agent We will train our agent by getting a state from the environment, taking an action, and receiving a reward. Using these three things, we can learn how to properly update our network so that, over time, it more often chooses the actions that yield the highest rewards for the states presented.
###Code
tf.reset_default_graph() #Clear the Tensorflow graph.
cBandit = contextual_bandit() #Load the bandits.
myAgent = agent(lr=0.001,s_size=cBandit.num_bandits,a_size=cBandit.num_actions) #Load the agent.
weights = tf.trainable_variables()[0] #The weights we will evaluate to look into the network.
total_episodes = 10000 #Set total number of episodes to train agent on.
total_reward = np.zeros([cBandit.num_bandits,cBandit.num_actions]) #Set scoreboard for bandits to 0.
e = 0.1 #Set the chance of taking a random action.
init = tf.global_variables_initializer()
# Launch the tensorflow graph
with tf.Session() as sess:
sess.run(init)
i = 0
while i < total_episodes:
s = cBandit.getBandit() #Get a state from the environment.
#Choose either a random action or one from our network.
if np.random.rand(1) < e:
action = np.random.randint(cBandit.num_actions)
else:
action = sess.run(myAgent.chosen_action,feed_dict={myAgent.state_in:[s]})
reward = cBandit.pullArm(action) #Get our reward for taking an action given a bandit.
#Update the network.
feed_dict={myAgent.reward_holder:[reward],myAgent.action_holder:[action],myAgent.state_in:[s]}
_,ww = sess.run([myAgent.update,weights], feed_dict=feed_dict)
#Update our running tally of scores.
total_reward[s,action] += reward
if i % 500 == 0:
print("Mean reward for each of the " + str(cBandit.num_bandits) + " bandits: " + str(np.mean(total_reward,axis=1)))
i+=1
for a in range(cBandit.num_bandits):
print("The agent thinks action " + str(np.argmax(ww[a])+1) + " for bandit " + str(a+1) + " is the most promising....")
if np.argmax(ww[a]) == np.argmin(cBandit.bandits[a]):
print("...and it was right!")
else:
print("...and it was wrong!")
###Output
Mean reward for each of the 3 bandits: [ 0. 0. 0.25]
Mean reward for each of the 3 bandits: [ 26.5 38.25 35.5 ]
Mean reward for each of the 3 bandits: [ 68.25 75.25 70.75]
Mean reward for each of the 3 bandits: [ 104.25 112.25 107.25]
Mean reward for each of the 3 bandits: [ 142.5 147.5 145.75]
Mean reward for each of the 3 bandits: [ 181.5 185.75 178.5 ]
Mean reward for each of the 3 bandits: [ 215.5 223.75 220. ]
Mean reward for each of the 3 bandits: [ 256.5 260.75 249.5 ]
Mean reward for each of the 3 bandits: [ 293.5 300.25 287.5 ]
Mean reward for each of the 3 bandits: [ 330.25 341. 323.5 ]
Mean reward for each of the 3 bandits: [ 368.75 377. 359. ]
Mean reward for each of the 3 bandits: [ 411.5 408.75 395. ]
Mean reward for each of the 3 bandits: [ 447. 447. 429.75]
Mean reward for each of the 3 bandits: [ 484. 482.75 466. ]
Mean reward for each of the 3 bandits: [ 522.5 520. 504.75]
Mean reward for each of the 3 bandits: [ 560.25 557.75 538.25]
Mean reward for each of the 3 bandits: [ 597.75 596.25 574.75]
Mean reward for each of the 3 bandits: [ 636.5 630.5 611.25]
Mean reward for each of the 3 bandits: [ 675.25 670. 644.5 ]
Mean reward for each of the 3 bandits: [ 710.5 706.5 682.75]
The agent thinks action 4 for bandit 1 is the most promising....
...and it was right!
The agent thinks action 2 for bandit 2 is the most promising....
...and it was right!
The agent thinks action 1 for bandit 3 is the most promising....
...and it was right!
|
arrays_strings/compress_alt/better_compress_solution.ipynb | ###Markdown
This notebook was prepared by [hashhar](https://github.com/hashhar). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Compress a string such that 'AAABCCDDDD' becomes 'A3BCCD4'. Only compress the string if it saves space.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Can you use additional data structures? * Yes* Is this case sensitive? * Yes Test Cases* None -> None* '' -> ''* 'AABBCC' -> 'AABBCC'* 'AAABCCDDDD' -> 'A3BCCD4' AlgorithmSince Python strings are immutable, we'll use a list of characters to build the compressed string representation. We'll then convert the list to a string.* Calculate the size of the compressed string * Note the constraint about compressing only if it saves space* If the compressed string size is >= string size, return string* Create compressed_string * For each char in string * If char is the same as last_char, increment count * Else * If the count is more than 2 * Append last_char to compressed_string * append count to compressed_string * count = 1 * last_char = char * If count is 1 * Append last_char to compressed_string * count = 1 * last_char = char * If count is 2 * Append last_char to compressed_string * Append last_char to compressed_string once more * count = 1 * last_char = char * Append last_char to compressed_string * Append count to compressed_string * Return compressed_stringComplexity:* Time: O(n)* Space: O(n) Code
###Code
def compress_string(string):
if string is None or len(string) == 0:
return string
# Calculate the size of the compressed string
size = 0
last_char = string[0]
for char in string:
if char != last_char:
size += 2
last_char = char
size += 2
# If the compressed string size is greater than
# or equal to string size, return original string
if size >= len(string):
return string
# Create compressed_string
# New objective:
# Single characters are to be left as is
    # Double characters are to be left as they are
compressed_string = list()
count = 0
last_char = string[0]
for char in string:
if char == last_char:
count += 1
else:
# Do the old compression tricks only if count exceeds two
if count > 2:
compressed_string.append(last_char)
compressed_string.append(str(count))
count = 1
last_char = char
# If count is either 1 or 2
else:
# If count is 1, leave the char as is
if count == 1:
compressed_string.append(last_char)
count = 1
last_char = char
# If count is 2, append the character twice
else:
compressed_string.append(last_char)
compressed_string.append(last_char)
count = 1
last_char = char
compressed_string.append(last_char)
compressed_string.append(str(count))
# Convert the characters in the list to a string
return "".join(compressed_string)
###Output
_____no_output_____
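###Markdown
A few illustrative calls (reusing the inputs from the unit tests below) show the behaviour of `compress_string` before running the test suite.
###Code
print(compress_string('AAABCCDDDD'))   # -> 'A3BCCD4'
print(compress_string('AABBCC'))       # -> 'AABBCC' (compression would not save space)
print(compress_string('aaBCCEFFFFKKMMMMMMP taaammanlaarrrr seeeeeeeeek tooo'))  # -> 'aaBCCEF4KKM6P ta3mmanlaar4 se9k to3'
###Output
_____no_output_____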
###Markdown
Unit Test
###Code
%%writefile test_compress.py
from nose.tools import assert_equal
class TestCompress(object):
def test_compress(self, func):
assert_equal(func(None), None)
assert_equal(func(''), '')
assert_equal(func('AABBCC'), 'AABBCC')
assert_equal(func('AAABCCDDDD'), 'A3BCCD4')
assert_equal(func('aaBCCEFFFFKKMMMMMMP taaammanlaarrrr seeeeeeeeek tooo'), 'aaBCCEF4KKM6P ta3mmanlaar4 se9k to3')
print('Success: test_compress')
def main():
test = TestCompress()
test.test_compress(compress_string)
if __name__ == '__main__':
main()
%run -i test_compress.py
###Output
Success: test_compress
###Markdown
This notebook was prepared by [hashhar](https://github.com/hashhar), second solution added by [janhak] (https://github.com/janhak). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Compress a string such that 'AAABCCDDDD' becomes 'A3BCCD4'. Only compress the string if it saves space.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Is this case sensitive? * Yes* Can we use additional data structures? * Yes* Can we assume this fits in memory? * Yes Test Cases* None -> None* '' -> ''* 'AABBCC' -> 'AABBCC'* 'AAABCCDDDD' -> 'A3BCCD4' AlgorithmSince Python strings are immutable, we'll use a list of characters to build the compressed string representation. We'll then convert the list to a string.* Calculate the size of the compressed string * Note the constraint about compressing only if it saves space* If the compressed string size is >= string size, return string* Create compressed_string * For each char in string * If char is the same as last_char, increment count * Else * If the count is more than 2 * Append last_char to compressed_string * append count to compressed_string * count = 1 * last_char = char * If count is 1 * Append last_char to compressed_string * count = 1 * last_char = char * If count is 2 * Append last_char to compressed_string * Append last_char to compressed_string once more * count = 1 * last_char = char * Append last_char to compressed_string * Append count to compressed_string * Return compressed_stringComplexity:* Time: O(n)* Space: O(n) Code
###Code
def compress_string(string):
if string is None or len(string) == 0:
return string
# Calculate the size of the compressed string
size = 0
last_char = string[0]
for char in string:
if char != last_char:
size += 2
last_char = char
size += 2
# If the compressed string size is greater than
# or equal to string size, return original string
if size >= len(string):
return string
# Create compressed_string
# New objective:
# Single characters are to be left as is
    # Double characters are to be left as they are
compressed_string = list()
count = 0
last_char = string[0]
for char in string:
if char == last_char:
count += 1
else:
# Do the old compression tricks only if count exceeds two
if count > 2:
compressed_string.append(last_char)
compressed_string.append(str(count))
count = 1
last_char = char
# If count is either 1 or 2
else:
# If count is 1, leave the char as is
if count == 1:
compressed_string.append(last_char)
count = 1
last_char = char
# If count is 2, append the character twice
else:
compressed_string.append(last_char)
compressed_string.append(last_char)
count = 1
last_char = char
compressed_string.append(last_char)
compressed_string.append(str(count))
# Convert the characters in the list to a string
return "".join(compressed_string)
###Output
_____no_output_____
###Markdown
Algorithm: Split to blocks and compressLet us split the string first into blocks of identical characters and then compress it block by block.* Split the string to blocks * For each character in string * Add this character to block * If the next character is different * Return block * Erase the content of block* Compress block * If block consists of two or fewer characters * Return block * Else * Append length of the block to the first character and return* Compress string * Split the string to blocks * Compress blocks * Join compressed blocks * Return result if it is shorter than original stringComplexity:* Time: O(n)* Space: O(n)
###Code
def split_to_blocks(string):
block = ''
for char, next_char in zip(string, string[1:] + ' '):
block += char
        if char != next_char:
yield block
block = ''
def compress_block(block):
if len(block) <= 2:
return block
else:
return block[0] + str(len(block))
def compress_string(string):
if string is None or not string:
return string
compressed = (compress_block(block) for block in split_to_blocks(string))
result = ''.join(compressed)
return result if len(result) < len(string) else string
###Output
_____no_output_____
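###Markdown
To see what the generator-based solution is doing, the purely illustrative cell below prints the blocks produced by `split_to_blocks` for one of the test inputs, together with the compressed result built from them.
###Code
print(list(split_to_blocks('AAABCCDDDD')))   # -> ['AAA', 'B', 'CC', 'DDDD']
print(compress_string('AAABCCDDDD'))         # -> 'A3BCCD4'
###Output
_____no_output_____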
###Markdown
Unit Test
###Code
%%writefile test_compress.py
from nose.tools import assert_equal
class TestCompress(object):
def test_compress(self, func):
assert_equal(func(None), None)
assert_equal(func(''), '')
assert_equal(func('AABBCC'), 'AABBCC')
assert_equal(func('AAABCCDDDD'), 'A3BCCD4')
assert_equal(func('aaBCCEFFFFKKMMMMMMP taaammanlaarrrr seeeeeeeeek tooo'), 'aaBCCEF4KKM6P ta3mmanlaar4 se9k to3')
print('Success: test_compress')
def main():
test = TestCompress()
test.test_compress(compress_string)
if __name__ == '__main__':
main()
%run -i test_compress.py
###Output
_____no_output_____
###Markdown
This notebook was prepared by [hashhar](https://github.com/hashhar). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Compress a string such that 'AAABCCDDDD' becomes 'A3BCCD4'. Only compress the string if it saves space.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Is this case sensitive? * Yes* Can we use additional data structures? * Yes* Can we assume this fits in memory? * Yes Test Cases* None -> None* '' -> ''* 'AABBCC' -> 'AABBCC'* 'AAABCCDDDD' -> 'A3BCCD4' AlgorithmSince Python strings are immutable, we'll use a list of characters to build the compressed string representation. We'll then convert the list to a string.* Calculate the size of the compressed string * Note the constraint about compressing only if it saves space* If the compressed string size is >= string size, return string* Create compressed_string * For each char in string * If char is the same as last_char, increment count * Else * If the count is more than 2 * Append last_char to compressed_string * append count to compressed_string * count = 1 * last_char = char * If count is 1 * Append last_char to compressed_string * count = 1 * last_char = char * If count is 2 * Append last_char to compressed_string * Append last_char to compressed_string once more * count = 1 * last_char = char * Append last_char to compressed_string * Append count to compressed_string * Return compressed_stringComplexity:* Time: O(n)* Space: O(n) Code
###Code
def compress_string(string):
if string is None or len(string) == 0:
return string
# Calculate the size of the compressed string
size = 0
last_char = string[0]
for char in string:
if char != last_char:
size += 2
last_char = char
size += 2
# If the compressed string size is greater than
# or equal to string size, return original string
if size >= len(string):
return string
# Create compressed_string
# New objective:
# Single characters are to be left as is
    # Double characters are to be left as they are
compressed_string = list()
count = 0
last_char = string[0]
for char in string:
if char == last_char:
count += 1
else:
# Do the old compression tricks only if count exceeds two
if count > 2:
compressed_string.append(last_char)
compressed_string.append(str(count))
count = 1
last_char = char
# If count is either 1 or 2
else:
# If count is 1, leave the char as is
if count == 1:
compressed_string.append(last_char)
count = 1
last_char = char
# If count is 2, append the character twice
else:
compressed_string.append(last_char)
compressed_string.append(last_char)
count = 1
last_char = char
compressed_string.append(last_char)
compressed_string.append(str(count))
# Convert the characters in the list to a string
return "".join(compressed_string)
###Output
_____no_output_____
###Markdown
Unit Test
###Code
%%writefile test_compress.py
from nose.tools import assert_equal
class TestCompress(object):
def test_compress(self, func):
assert_equal(func(None), None)
assert_equal(func(''), '')
assert_equal(func('AABBCC'), 'AABBCC')
assert_equal(func('AAABCCDDDD'), 'A3BCCD4')
assert_equal(func('aaBCCEFFFFKKMMMMMMP taaammanlaarrrr seeeeeeeeek tooo'), 'aaBCCEF4KKM6P ta3mmanlaar4 se9k to3')
print('Success: test_compress')
def main():
test = TestCompress()
test.test_compress(compress_string)
if __name__ == '__main__':
main()
%run -i test_compress.py
###Output
Success: test_compress
###Markdown
This notebook was prepared by [hashhar](https://github.com/hashhar), second solution added by [janhak](https://github.com/janhak). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Compress a string such that 'AAABCCDDDD' becomes 'A3BCCD4'. Only compress the string if it saves space.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Is this case sensitive? * Yes* Can we use additional data structures? * Yes* Can we assume this fits in memory? * Yes Test Cases* None -> None* '' -> ''* 'AABBCC' -> 'AABBCC'* 'AAABCCDDDD' -> 'A3BCCD4' AlgorithmSince Python strings are immutable, we'll use a list of characters to build the compressed string representation. We'll then convert the list to a string.* Calculate the size of the compressed string * Note the constraint about compressing only if it saves space* If the compressed string size is >= string size, return string* Create compressed_string * For each char in string * If char is the same as last_char, increment count * Else * If the count is more than 2 * Append last_char to compressed_string * append count to compressed_string * count = 1 * last_char = char * If count is 1 * Append last_char to compressed_string * count = 1 * last_char = char * If count is 2 * Append last_char to compressed_string * Append last_char to compressed_string once more * count = 1 * last_char = char * Append last_char to compressed_string * Append count to compressed_string * Return compressed_stringComplexity:* Time: O(n)* Space: O(n) Code
###Code
def compress_string(string):
if string is None or len(string) == 0:
return string
# Calculate the size of the compressed string
size = 0
last_char = string[0]
for char in string:
if char != last_char:
size += 2
last_char = char
size += 2
# If the compressed string size is greater than
# or equal to string size, return original string
if size >= len(string):
return string
# Create compressed_string
# New objective:
# Single characters are to be left as is
# Double characters are to be left as are
compressed_string = list()
count = 0
last_char = string[0]
for char in string:
if char == last_char:
count += 1
else:
# Do the old compression tricks only if count exceeds two
if count > 2:
compressed_string.append(last_char)
compressed_string.append(str(count))
count = 1
last_char = char
# If count is either 1 or 2
else:
# If count is 1, leave the char as is
if count == 1:
compressed_string.append(last_char)
count = 1
last_char = char
# If count is 2, append the character twice
else:
compressed_string.append(last_char)
compressed_string.append(last_char)
count = 1
last_char = char
compressed_string.append(last_char)
compressed_string.append(str(count))
# Convert the characters in the list to a string
return "".join(compressed_string)
###Output
_____no_output_____
###Markdown
Algorithm: Split to blocks and compressLet us split the string first into blocks of identical characters and then compress it block by block.* Split the string to blocks * For each character in string * Add this character to block * If the next character is different * Return block * Erase the content of block* Compress block * If block consists of two or fewer characters * Return block * Else * Append length of the block to the first character and return* Compress string * Split the string to blocks * Compress blocks * Join compressed blocks * Return result if it is shorter than original stringComplexity:* Time: O(n)* Space: O(n)
###Code
def split_to_blocks(string):
block = ''
for char, next_char in zip(string, string[1:] + ' '):
block += char
        if char != next_char:
yield block
block = ''
def compress_block(block):
if len(block) <= 2:
return block
else:
return block[0] + str(len(block))
def compress_string(string):
if string is None or not string:
return string
compressed = (compress_block(block) for block in split_to_blocks(string))
result = ''.join(compressed)
return result if len(result) < len(string) else string
###Output
_____no_output_____
###Markdown
Unit Test
###Code
%%writefile test_compress.py
import unittest
class TestCompress(unittest.TestCase):
def test_compress(self, func):
self.assertEqual(func(None), None)
self.assertEqual(func(''), '')
self.assertEqual(func('AABBCC'), 'AABBCC')
self.assertEqual(func('AAABCCDDDD'), 'A3BCCD4')
self.assertEqual(
func('aaBCCEFFFFKKMMMMMMP taaammanlaarrrr seeeeeeeeek tooo'),
'aaBCCEF4KKM6P ta3mmanlaar4 se9k to3',
)
print('Success: test_compress')
def main():
test = TestCompress()
test.test_compress(compress_string)
if __name__ == '__main__':
main()
%run -i test_compress.py
###Output
Success: test_compress
###Markdown
This notebook was prepared by [hashhar](https://github.com/hashhar), second solution added by [janhak](https://github.com/janhak). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Compress a string such that 'AAABCCDDDD' becomes 'A3BCCD4'. Only compress the string if it saves space.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Is this case sensitive? * Yes* Can we use additional data structures? * Yes* Can we assume this fits in memory? * Yes Test Cases* None -> None* '' -> ''* 'AABBCC' -> 'AABBCC'* 'AAABCCDDDD' -> 'A3BCCD4' AlgorithmSince Python strings are immutable, we'll use a list of characters to build the compressed string representation. We'll then convert the list to a string.* Calculate the size of the compressed string * Note the constraint about compressing only if it saves space* If the compressed string size is >= string size, return string* Create compressed_string * For each char in string * If char is the same as last_char, increment count * Else * If the count is more than 2 * Append last_char to compressed_string * append count to compressed_string * count = 1 * last_char = char * If count is 1 * Append last_char to compressed_string * count = 1 * last_char = char * If count is 2 * Append last_char to compressed_string * Append last_char to compressed_string once more * count = 1 * last_char = char * Append last_char to compressed_string * Append count to compressed_string * Return compressed_stringComplexity:* Time: O(n)* Space: O(n) Code
###Code
def compress_string(string):
if string is None or len(string) == 0:
return string
# Calculate the size of the compressed string
size = 0
last_char = string[0]
for char in string:
if char != last_char:
size += 2
last_char = char
size += 2
# If the compressed string size is greater than
# or equal to string size, return original string
if size >= len(string):
return string
# Create compressed_string
# New objective:
# Single characters are to be left as is
    # Double characters are to be left as they are
compressed_string = list()
count = 0
last_char = string[0]
for char in string:
if char == last_char:
count += 1
else:
# Do the old compression tricks only if count exceeds two
if count > 2:
compressed_string.append(last_char)
compressed_string.append(str(count))
count = 1
last_char = char
# If count is either 1 or 2
else:
# If count is 1, leave the char as is
if count == 1:
compressed_string.append(last_char)
count = 1
last_char = char
# If count is 2, append the character twice
else:
compressed_string.append(last_char)
compressed_string.append(last_char)
count = 1
last_char = char
compressed_string.append(last_char)
compressed_string.append(str(count))
# Convert the characters in the list to a string
return "".join(compressed_string)
###Output
_____no_output_____
###Markdown
Algorithm: Split to blocks and compressLet us split the string first into blocks of identical characters and then compress it block by block.* Split the string to blocks * For each character in string * Add this character to block * If the next character is different * Return block * Erase the content of block* Compress block * If block consists of two or fewer characters * Return block * Else * Append length of the block to the first character and return* Compress string * Split the string to blocks * Compress blocks * Join compressed blocks * Return result if it is shorter than original stringComplexity:* Time: O(n)* Space: O(n)
###Code
def split_to_blocks(string):
block = ''
for char, next_char in zip(string, string[1:] + ' '):
block += char
        if char != next_char:
yield block
block = ''
def compress_block(block):
if len(block) <= 2:
return block
else:
return block[0] + str(len(block))
def compress_string(string):
if string is None or not string:
return string
compressed = (compress_block(block) for block in split_to_blocks(string))
result = ''.join(compressed)
return result if len(result) < len(string) else string
###Output
_____no_output_____
###Markdown
Unit Test
###Code
%%writefile test_compress.py
import unittest
class TestCompress(unittest.TestCase):
def test_compress(self, func):
self.assertEqual(func(None), None)
self.assertEqual(func(''), '')
self.assertEqual(func('AABBCC'), 'AABBCC')
self.assertEqual(func('AAABCCDDDD'), 'A3BCCD4')
self.assertEqual(
func('aaBCCEFFFFKKMMMMMMP taaammanlaarrrr seeeeeeeeek tooo'),
'aaBCCEF4KKM6P ta3mmanlaar4 se9k to3',
)
print('Success: test_compress')
def main():
test = TestCompress()
test.test_compress(compress_string)
if __name__ == '__main__':
main()
%run -i test_compress.py
###Output
Success: test_compress
###Markdown
This notebook was prepared by [hashhar](https://github.com/hashhar), second solution added by [janhak](https://github.com/janhak). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Compress a string such that 'AAABCCDDDD' becomes 'A3BCCD4'. Only compress the string if it saves space.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Is this case sensitive? * Yes* Can we use additional data structures? * Yes* Can we assume this fits in memory? * Yes Test Cases* None -> None* '' -> ''* 'AABBCC' -> 'AABBCC'* 'AAABCCDDDD' -> 'A3BCCD4' AlgorithmSince Python strings are immutable, we'll use a list of characters to build the compressed string representation. We'll then convert the list to a string.* Calculate the size of the compressed string * Note the constraint about compressing only if it saves space* If the compressed string size is >= string size, return string* Create compressed_string * For each char in string * If char is the same as last_char, increment count * Else * If the count is more than 2 * Append last_char to compressed_string * append count to compressed_string * count = 1 * last_char = char * If count is 1 * Append last_char to compressed_string * count = 1 * last_char = char * If count is 2 * Append last_char to compressed_string * Append last_char to compressed_string once more * count = 1 * last_char = char * Append last_char to compressed_string * Append count to compressed_string * Return compressed_stringComplexity:* Time: O(n)* Space: O(n) Code
###Code
def compress_string(string):
if string is None or len(string) == 0:
return string
# Calculate the size of the compressed string
size = 0
last_char = string[0]
for char in string:
if char != last_char:
size += 2
last_char = char
size += 2
# If the compressed string size is greater than
# or equal to string size, return original string
if size >= len(string):
return string
# Create compressed_string
# New objective:
# Single characters are to be left as is
    # Double characters are to be left as they are
compressed_string = list()
count = 0
last_char = string[0]
for char in string:
if char == last_char:
count += 1
else:
# Do the old compression tricks only if count exceeds two
if count > 2:
compressed_string.append(last_char)
compressed_string.append(str(count))
count = 1
last_char = char
# If count is either 1 or 2
else:
# If count is 1, leave the char as is
if count == 1:
compressed_string.append(last_char)
count = 1
last_char = char
# If count is 2, append the character twice
else:
compressed_string.append(last_char)
compressed_string.append(last_char)
count = 1
last_char = char
compressed_string.append(last_char)
compressed_string.append(str(count))
# Convert the characters in the list to a string
return "".join(compressed_string)
###Output
_____no_output_____
###Markdown
Algorithm: Split to blocks and compressLet us split the string first into blocks of identical characters and then compress it block by block.* Split the string to blocks * For each character in string * Add this character to block * If the next character is different * Return block * Erase the content of block* Compress block * If block consists of two or fewer characters * Return block * Else * Append length of the block to the first character and return* Compress string * Split the string to blocks * Compress blocks * Join compressed blocks * Return result if it is shorter than original stringComplexity:* Time: O(n)* Space: O(n)
###Code
def split_to_blocks(string):
block = ''
for char, next_char in zip(string, string[1:] + ' '):
block += char
        if char != next_char:
yield block
block = ''
def compress_block(block):
if len(block) <= 2:
return block
else:
return block[0] + str(len(block))
def compress_string(string):
if string is None or not string:
return string
compressed = (compress_block(block) for block in split_to_blocks(string))
result = ''.join(compressed)
return result if len(result) < len(string) else string
###Output
_____no_output_____
###Markdown
Unit Test
###Code
%%writefile test_compress.py
import unittest
class TestCompress(unittest.TestCase):
def test_compress(self, func):
self.assertEqual(func(None), None)
self.assertEqual(func(''), '')
self.assertEqual(func('AABBCC'), 'AABBCC')
self.assertEqual(func('AAABCCDDDD'), 'A3BCCD4')
self.assertEqual(
func('aaBCCEFFFFKKMMMMMMP taaammanlaarrrr seeeeeeeeek tooo'),
'aaBCCEF4KKM6P ta3mmanlaar4 se9k to3',
)
print('Success: test_compress')
def main():
test = TestCompress()
test.test_compress(compress_string)
if __name__ == '__main__':
main()
%run -i test_compress.py
###Output
Success: test_compress
|
0.12/_downloads/plot_eog_artifact_histogram.ipynb | ###Markdown
Show EOG artifact timingCompute the distribution of timing for EOG artifacts.
###Code
# Authors: Eric Larson <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
###Output
_____no_output_____
###Markdown
Set parameters
###Code
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
events = mne.find_events(raw, 'STI 014')
eog_event_id = 512
eog_events = mne.preprocessing.find_eog_events(raw, eog_event_id)
raw.add_events(eog_events, 'STI 014')
# Read epochs
picks = mne.pick_types(raw.info, meg=False, eeg=False, stim=True, eog=False)
tmin, tmax = -0.2, 0.5
event_ids = {'AudL': 1, 'AudR': 2, 'VisL': 3, 'VisR': 4}
epochs = mne.Epochs(raw, events, event_ids, tmin, tmax, picks=picks)
# Get the stim channel data
pick_ch = mne.pick_channels(epochs.ch_names, ['STI 014'])[0]
data = epochs.get_data()[:, pick_ch, :].astype(int)
data = np.sum((data.astype(int) & 512) == 512, axis=0)
###Output
_____no_output_____
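###Markdown
The `& 512` operation above is a bitwise test: the stim channel can carry several merged trigger codes, and the EOG events were added with id 512, so a sample counts as containing a blink whenever that bit is set. The toy example below uses made-up values (not the sample data) purely to illustrate the masking and the per-sample count.
###Code
toy = np.array([[0, 512, 513,   2],
                [0, 512,   1, 514]])
print((toy & 512) == 512)                   # True wherever the 512 bit is set
print(np.sum((toy & 512) == 512, axis=0))   # blink counts per time sample
###Output
_____no_output_____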
###Markdown
Plot EOG artifact distribution
###Code
plt.stem(1e3 * epochs.times, data)
plt.xlabel('Times (ms)')
plt.ylabel('Blink counts (from %s trials)' % len(epochs))
plt.show()
###Output
_____no_output_____ |
site/en-snapshot/datasets/overview.ipynb | ###Markdown
TensorFlow DatasetsTFDS provides a collection of ready-to-use datasets for use with TensorFlow, Jax, and other Machine Learning frameworks.It handles downloading and preparing the data deterministically and constructing a `tf.data.Dataset` (or `np.array`).Note: Do not confuse [TFDS](https://www.tensorflow.org/datasets) (this library) with `tf.data` (TensorFlow API to build efficient data pipelines). TFDS is a high level wrapper around `tf.data`. If you're not familiar with this API, we encourage you to read [the official tf.data guide](https://www.tensorflow.org/guide/data) first. Copyright 2018 The TensorFlow Datasets Authors, Licensed under the Apache License, Version 2.0 View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook InstallationTFDS exists in two packages:* `pip install tensorflow-datasets`: The stable version, released every few months.* `pip install tfds-nightly`: Released every day, contains the last versions of the datasets.This colab uses `tfds-nightly`:
###Code
!pip install -q tfds-nightly tensorflow matplotlib
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
###Output
_____no_output_____
###Markdown
Find available datasetsAll dataset builders are subclass of `tfds.core.DatasetBuilder`. To get the list of available builders, use `tfds.list_builders()` or look at our [catalog](https://www.tensorflow.org/datasets/catalog/overview).
###Code
tfds.list_builders()
###Output
_____no_output_____
###Markdown
Load a dataset tfds.loadThe easiest way of loading a dataset is `tfds.load`. It will:1. Download the data and save it as [`tfrecord`](https://www.tensorflow.org/tutorials/load_data/tfrecord) files.2. Load the `tfrecord` and create the `tf.data.Dataset`.
###Code
ds = tfds.load('mnist', split='train', shuffle_files=True)
assert isinstance(ds, tf.data.Dataset)
print(ds)
###Output
_____no_output_____
###Markdown
Some common arguments (a sketch combining several of them follows the next code cell):* `split=`: Which split to read (e.g. `'train'`, `['train', 'test']`, `'train[80%:]'`,...). See our [split API guide](https://www.tensorflow.org/datasets/splits).* `shuffle_files=`: Control whether to shuffle the files between each epoch (TFDS stores big datasets in multiple smaller files).* `data_dir=`: Location where the dataset is saved (defaults to `~/tensorflow_datasets/`)* `with_info=True`: Returns the `tfds.core.DatasetInfo` containing dataset metadata* `download=False`: Disable download tfds.builder`tfds.load` is a thin wrapper around `tfds.core.DatasetBuilder`. You can get the same output using the `tfds.core.DatasetBuilder` API:
###Code
builder = tfds.builder('mnist')
# 1. Create the tfrecord files (no-op if already exists)
builder.download_and_prepare()
# 2. Load the `tf.data.Dataset`
ds = builder.as_dataset(split='train', shuffle_files=True)
print(ds)
###Output
_____no_output_____
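###Markdown
As referenced above, here is a hedged sketch combining several of the common `tfds.load` arguments in one call (the dataset name and split slices are only illustrative):
```python
import tensorflow_datasets as tfds

# Load 80% of train plus the full test split, as (image, label) tuples,
# together with the dataset metadata.
(ds_train, ds_test), info = tfds.load(
    'mnist',
    split=['train[:80%]', 'test'],
    shuffle_files=True,
    as_supervised=True,
    with_info=True,
)
print(info.splits['train[:80%]'].num_examples)
```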
###Markdown
`tfds build` CLIIf you want to generate a specific dataset, you can use the [`tfds` command line](https://www.tensorflow.org/datasets/cli). For example:
```sh
tfds build mnist
```
See [the doc](https://www.tensorflow.org/datasets/cli) for available flags. Iterate over a dataset As dictBy default, the `tf.data.Dataset` object contains a `dict` of `tf.Tensor`s:
###Code
ds = tfds.load('mnist', split='train')
ds = ds.take(1) # Only take a single example
for example in ds: # example is `{'image': tf.Tensor, 'label': tf.Tensor}`
print(list(example.keys()))
image = example["image"]
label = example["label"]
print(image.shape, label)
###Output
_____no_output_____
###Markdown
To find out the `dict` key names and structure, look at the dataset documentation in [our catalog](https://www.tensorflow.org/datasets/catalog/overviewall_datasets). For example: [mnist documentation](https://www.tensorflow.org/datasets/catalog/mnist). As tuple (`as_supervised=True`)By using `as_supervised=True`, you can get a tuple `(features, label)` instead for supervised datasets.
###Code
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in ds: # example is (image, label)
print(image.shape, label)
###Output
_____no_output_____
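###Markdown
A typical follow-up (a hedged sketch, not part of this guide's pipeline) is to map a preprocessing function over those `(image, label)` tuples before batching:
```python
ds = tfds.load('mnist', split='train', as_supervised=True)
# Scale images to [0, 1] and build a simple input pipeline.
ds = ds.map(lambda image, label: (tf.cast(image, tf.float32) / 255.0, label))
ds = ds.batch(32).prefetch(tf.data.AUTOTUNE)
```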
###Markdown
As numpy (`tfds.as_numpy`)Use `tfds.as_numpy` to convert:* `tf.Tensor` -> `np.array`* `tf.data.Dataset` -> `Iterator[Tree[np.array]]` (`Tree` can be an arbitrarily nested `Dict`, `Tuple`)
###Code
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in tfds.as_numpy(ds):
print(type(image), type(label), label)
###Output
_____no_output_____
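###Markdown
A hedged variant of the cell above: without `as_supervised=True`, each element stays a `dict`, and `tfds.as_numpy` yields a nested structure of `np.array`s per example:
```python
ds = tfds.load('mnist', split='train')
for example in tfds.as_numpy(ds.take(1)):
    # example is a dict such as {'image': np.array, 'label': np.int64}
    print({key: type(value) for key, value in example.items()})
```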
###Markdown
As batched tf.Tensor (`batch_size=-1`)By using `batch_size=-1`, you can load the full dataset in a single batch.This can be combined with `as_supervised=True` and `tfds.as_numpy` to get the data as `(np.array, np.array)`:
###Code
image, label = tfds.as_numpy(tfds.load(
'mnist',
split='test',
batch_size=-1,
as_supervised=True,
))
print(type(image), image.shape)
###Output
_____no_output_____
###Markdown
Be careful that your dataset can fit in memory, and that all examples have the same shape. Benchmark your datasetsBenchmarking a dataset is a simple `tfds.benchmark` call on any iterable (e.g. `tf.data.Dataset`, `tfds.as_numpy`,...).
###Code
ds = tfds.load('mnist', split='train')
ds = ds.batch(32).prefetch(1)
tfds.benchmark(ds, batch_size=32)
tfds.benchmark(ds, batch_size=32) # Second epoch much faster due to auto-caching
###Output
_____no_output_____
###Markdown
* Do not forget to normalize the results per batch size with the `batch_size=` kwarg.* In the summary, the first warmup batch is separated from the other ones to capture `tf.data.Dataset` extra setup time (e.g. buffers initialization,...).* Notice how the second iteration is much faster due to [TFDS auto-caching](https://www.tensorflow.org/datasets/performancesauto-caching).* `tfds.benchmark` returns a `tfds.core.BenchmarkResult` which can be inspected for further analysis. Build end-to-end pipelineTo go further, you can look:* Our [end-to-end Keras example](https://www.tensorflow.org/datasets/keras_example) to see a full training pipeline (with batching, shuffling,...).* Our [performance guide](https://www.tensorflow.org/datasets/performances) to improve the speed of your pipelines (tip: use `tfds.benchmark(ds)` to benchmark your datasets). Visualization tfds.as_dataframe`tf.data.Dataset` objects can be converted to [`pandas.DataFrame`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) with `tfds.as_dataframe` to be visualized on [Colab](https://colab.research.google.com).* Add the `tfds.core.DatasetInfo` as second argument of `tfds.as_dataframe` to visualize images, audio, texts, videos,...* Use `ds.take(x)` to only display the first `x` examples. `pandas.DataFrame` will load the full dataset in-memory, and can be very expensive to display.
###Code
ds, info = tfds.load('mnist', split='train', with_info=True)
tfds.as_dataframe(ds.take(4), info)
###Output
_____no_output_____
###Markdown
tfds.show_examples`tfds.show_examples` returns a `matplotlib.figure.Figure` (only image datasets supported now):
###Code
ds, info = tfds.load('mnist', split='train', with_info=True)
fig = tfds.show_examples(ds, info)
###Output
_____no_output_____
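###Markdown
Since a `matplotlib.figure.Figure` is returned, it can be handled like any other figure; for example (the file name here is just an illustration):
```python
# Save the grid of example images to disk.
fig.savefig('mnist_examples.png', dpi=150)
```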
###Markdown
Access the dataset metadataAll builders include a `tfds.core.DatasetInfo` object containing the dataset metadata.It can be accessed through:* The `tfds.load` API:
###Code
ds, info = tfds.load('mnist', with_info=True)
###Output
_____no_output_____
###Markdown
* The `tfds.core.DatasetBuilder` API:
###Code
builder = tfds.builder('mnist')
info = builder.info
###Output
_____no_output_____
###Markdown
The dataset info contains additional information about the dataset (version, citation, homepage, description,...).
###Code
print(info)
###Output
_____no_output_____
###Markdown
Features metadata (label names, image shape,...)Access the `tfds.features.FeatureDict`:
###Code
info.features
###Output
_____no_output_____
###Markdown
Number of classes, label names:
###Code
print(info.features["label"].num_classes)
print(info.features["label"].names)
print(info.features["label"].int2str(7)) # Human readable version (8 -> 'cat')
print(info.features["label"].str2int('7'))
###Output
_____no_output_____
###Markdown
Shapes, dtypes:
###Code
print(info.features.shape)
print(info.features.dtype)
print(info.features['image'].shape)
print(info.features['image'].dtype)
###Output
_____no_output_____
###Markdown
Split metadata (e.g. split names, number of examples,...)Access the `tfds.core.SplitDict`:
###Code
print(info.splits)
###Output
_____no_output_____
###Markdown
Available splits:
###Code
print(list(info.splits.keys()))
###Output
_____no_output_____
###Markdown
Get info on individual split:
###Code
print(info.splits['train'].num_examples)
print(info.splits['train'].filenames)
print(info.splits['train'].num_shards)
###Output
_____no_output_____
###Markdown
It also works with the subsplit API:
###Code
print(info.splits['train[15%:75%]'].num_examples)
print(info.splits['train[15%:75%]'].file_instructions)
###Output
_____no_output_____
###Markdown
TensorFlow DatasetsTFDS provides a collection of ready-to-use datasets for use with TensorFlow, Jax, and other Machine Learning frameworks.It handles downloading and preparing the data deterministically and constructing a `tf.data.Dataset` (or `np.array`).Note: Do not confuse [TFDS](https://www.tensorflow.org/datasets) (this library) with `tf.data` (TensorFlow API to build efficient data pipelines). TFDS is a high level wrapper around `tf.data`. If you're not familiar with this API, we encourage you to read [the official tf.data guide](https://www.tensorflow.org/guide/datasets) first. Copyright 2018 The TensorFlow Datasets Authors, Licensed under the Apache License, Version 2.0 View on TensorFlow.org Run in Google Colab View source on GitHub InstallationTFDS exists in two packages:* `pip install tensorflow-datasets`: The stable version, released every few months.* `pip install tfds-nightly`: Released every day, contains the last versions of the datasets.This colab uses `tfds-nightly`:
###Code
!pip install -q tfds-nightly tensorflow matplotlib
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
###Output
_____no_output_____
###Markdown
Find available datasetsAll dataset builders are subclasses of `tfds.core.DatasetBuilder`. To get the list of available builders, use `tfds.list_builders()` or look at our [catalog](https://www.tensorflow.org/datasets/catalog/overview).
###Code
tfds.list_builders()
###Output
_____no_output_____
###Markdown
Load a datasetThe easiest way of loading a dataset is `tfds.load`. It will:1. Download the data and save it as [`tfrecord`](https://www.tensorflow.org/tutorials/load_data/tfrecord) files.2. Load the `tfrecord` and create the `tf.data.Dataset`.
###Code
ds = tfds.load('mnist', split='train', shuffle_files=True)
assert isinstance(ds, tf.data.Dataset)
print(ds)
###Output
_____no_output_____
###Markdown
Some common arguments:* `split=`: Which split to read (e.g. `'train'`, `['train', 'test']`, `'train[80%:]'`,...). See our [split API guide](https://www.tensorflow.org/datasets/splits).* `shuffle_files=`: Control whether to shuffle the files between each epoch (TFDS stores big datasets in multiple smaller files).* `data_dir=`: Location where the dataset is saved (defaults to `~/tensorflow_datasets/`)* `with_info=True`: Returns the `tfds.core.DatasetInfo` containing dataset metadata* `download=False`: Disable download `tfds.load` is a thin wrapper around `tfds.core.DatasetBuilder`. You can get the same output using the `tfds.core.DatasetBuilder` API:
###Code
builder = tfds.builder('mnist')
# 1. Create the tfrecord files (no-op if already exists)
builder.download_and_prepare()
# 2. Load the `tf.data.Dataset`
ds = builder.as_dataset(split='train', shuffle_files=True)
print(ds)
###Output
_____no_output_____
###Markdown
Iterate over a dataset As dictBy default, the `tf.data.Dataset` object contains a `dict` of `tf.Tensor`s:
###Code
ds = tfds.load('mnist', split='train')
ds = ds.take(1) # Only take a single example
for example in ds: # example is `{'image': tf.Tensor, 'label': tf.Tensor}`
print(list(example.keys()))
image = example["image"]
label = example["label"]
print(image.shape, label)
###Output
_____no_output_____
###Markdown
As tuple (`as_supervised=True`)By using `as_supervised=True`, you can get a tuple `(features, label)` instead for supervised datasets.
###Code
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in ds: # example is (image, label)
print(image.shape, label)
###Output
_____no_output_____
###Markdown
As numpy (`tfds.as_numpy`)Use `tfds.as_numpy` to convert:* `tf.Tensor` -> `np.array`* `tf.data.Dataset` -> `Generator[np.array]`
###Code
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in tfds.as_numpy(ds):
print(type(image), type(label), label)
###Output
_____no_output_____
###Markdown
As batched tf.Tensor (`batch_size=-1`)By using `batch_size=-1`, you can load the full dataset in a single batch.`tfds.load` will return a `dict` (`tuple` with `as_supervised=True`) of `tf.Tensor` (`np.array` with `tfds.as_numpy`).Be careful that your dataset can fit in memory, and that all examples have the same shape.
###Code
image, label = tfds.as_numpy(tfds.load(
'mnist',
split='test',
batch_size=-1,
as_supervised=True,
))
print(type(image), image.shape)
###Output
_____no_output_____
###Markdown
Build end-to-end pipelineTo go further, you can look at:* Our [end-to-end Keras example](https://www.tensorflow.org/datasets/keras_example) to see a full training pipeline (with batching, shuffling,...).* Our [performance guide](https://www.tensorflow.org/datasets/performances) to improve the speed of your pipelines. Visualization tfds.as_dataframe`tf.data.Dataset` objects can be converted to [`pandas.DataFrame`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) with `tfds.as_dataframe` to be visualized on [Colab](https://colab.research.google.com).* Add the `tfds.core.DatasetInfo` as second argument of `tfds.as_dataframe` to visualize images, audio, texts, videos,...* Use `ds.take(x)` to only display the first `x` examples. `pandas.DataFrame` will load the full dataset in-memory, and can be very expensive to display.
###Code
ds, info = tfds.load('mnist', split='train', with_info=True)
tfds.as_dataframe(ds.take(4), info)
###Output
_____no_output_____
###Markdown
tfds.show_examplesFor images, use `tfds.show_examples` (only image datasets are supported for now):
###Code
ds, info = tfds.load('mnist', split='train', with_info=True)
fig = tfds.show_examples(ds, info)
###Output
_____no_output_____
###Markdown
Access the dataset metadataAll builders include a `tfds.core.DatasetInfo` object containing the dataset metadata.It can be accessed through:* The `tfds.load` API:
###Code
ds, info = tfds.load('mnist', with_info=True)
###Output
_____no_output_____
###Markdown
* The `tfds.core.DatasetBuilder` API:
###Code
builder = tfds.builder('mnist')
info = builder.info
###Output
_____no_output_____
###Markdown
The dataset info contains additional information about the dataset (version, citation, homepage, description,...).
###Code
print(info)
###Output
_____no_output_____
###Markdown
Features metadata (label names, image shape,...)Access the `tfds.features.FeatureDict`:
###Code
info.features
###Output
_____no_output_____
###Markdown
Number of classes, label names:
###Code
print(info.features["label"].num_classes)
print(info.features["label"].names)
print(info.features["label"].int2str(7)) # Human readable version (8 -> 'cat')
print(info.features["label"].str2int('7'))
###Output
_____no_output_____
###Markdown
Shapes, dtypes:
###Code
print(info.features.shape)
print(info.features.dtype)
print(info.features['image'].shape)
print(info.features['image'].dtype)
###Output
_____no_output_____
###Markdown
Split metadata (e.g. split names, number of examples,...)Access the `tfds.core.SplitDict`:
###Code
print(info.splits)
###Output
_____no_output_____
###Markdown
Available splits:
###Code
print(list(info.splits.keys()))
###Output
_____no_output_____
###Markdown
Get info on individual split:
###Code
print(info.splits['train'].num_examples)
print(info.splits['train'].filenames)
print(info.splits['train'].num_shards)
###Output
_____no_output_____
###Markdown
It also works with the subsplit API:
###Code
print(info.splits['train[15%:75%]'].num_examples)
print(info.splits['train[15%:75%]'].file_instructions)
###Output
_____no_output_____
###Markdown
TensorFlow DatasetsTFDS provides a collection of ready-to-use datasets for use with TensorFlow, Jax, and other Machine Learning frameworks.It handles downloading and preparing the data deterministically and constructing a `tf.data.Dataset` (or `np.array`).Note: Do not confuse [TFDS](https://www.tensorflow.org/datasets) (this library) with `tf.data` (TensorFlow API to build efficient data pipelines). TFDS is a high level wrapper around `tf.data`. If you're not familiar with this API, we encourage you to read [the official tf.data guide](https://www.tensorflow.org/guide/datasets) first. Copyright 2018 The TensorFlow Datasets Authors, Licensed under the Apache License, Version 2.0 View on TensorFlow.org Run in Google Colab View source on GitHub InstallationTFDS exists in two packages:* `pip install tensorflow-datasets`: The stable version, released every few months.* `pip install tfds-nightly`: Released every day, contains the last versions of the datasets.This colab uses `tfds-nightly`:
###Code
!pip install -q tfds-nightly tensorflow matplotlib
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
###Output
_____no_output_____
###Markdown
Find available datasetsAll dataset builders are subclasses of `tfds.core.DatasetBuilder`. To get the list of available builders, use `tfds.list_builders()` or look at our [catalog](https://www.tensorflow.org/datasets/catalog/overview).
###Code
tfds.list_builders()
###Output
_____no_output_____
###Markdown
Load a dataset tfds.loadThe easiest way of loading a dataset is `tfds.load`. It will:1. Download the data and save it as [`tfrecord`](https://www.tensorflow.org/tutorials/load_data/tfrecord) files.2. Load the `tfrecord` and create the `tf.data.Dataset`.
###Code
ds = tfds.load('mnist', split='train', shuffle_files=True)
assert isinstance(ds, tf.data.Dataset)
print(ds)
###Output
_____no_output_____
###Markdown
Some common arguments:* `split=`: Which split to read (e.g. `'train'`, `['train', 'test']`, `'train[80%:]'`,...). See our [split API guide](https://www.tensorflow.org/datasets/splits).* `shuffle_files=`: Control whether to shuffle the files between each epoch (TFDS stores big datasets in multiple smaller files).* `data_dir=`: Location where the dataset is saved (defaults to `~/tensorflow_datasets/`)* `with_info=True`: Returns the `tfds.core.DatasetInfo` containing dataset metadata* `download=False`: Disable download tfds.builder`tfds.load` is a thin wrapper around `tfds.core.DatasetBuilder`. You can get the same output using the `tfds.core.DatasetBuilder` API:
###Code
builder = tfds.builder('mnist')
# 1. Create the tfrecord files (no-op if already exists)
builder.download_and_prepare()
# 2. Load the `tf.data.Dataset`
ds = builder.as_dataset(split='train', shuffle_files=True)
print(ds)
###Output
_____no_output_____
###Markdown
`tfds build` CLIIf you want to generate a specific dataset, you can use the [`tfds` command line](https://www.tensorflow.org/datasets/cli). For example:```shtfds build mnist```See [the doc](https://www.tensorflow.org/datasets/cli) for available flags. Iterate over a dataset As dictBy default, the `tf.data.Dataset` object contains a `dict` of `tf.Tensor`s:
###Code
ds = tfds.load('mnist', split='train')
ds = ds.take(1) # Only take a single example
for example in ds: # example is `{'image': tf.Tensor, 'label': tf.Tensor}`
print(list(example.keys()))
image = example["image"]
label = example["label"]
print(image.shape, label)
###Output
_____no_output_____
###Markdown
As tuple (`as_supervised=True`)By using `as_supervised=True`, you can get a tuple `(features, label)` instead for supervised datasets.
###Code
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in ds: # example is (image, label)
print(image.shape, label)
###Output
_____no_output_____
###Markdown
As numpy (`tfds.as_numpy`)Use `tfds.as_numpy` to convert:* `tf.Tensor` -> `np.array`* `tf.data.Dataset` -> `Generator[np.array]`
###Code
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in tfds.as_numpy(ds):
print(type(image), type(label), label)
###Output
_____no_output_____
###Markdown
As batched tf.Tensor (`batch_size=-1`)By using `batch_size=-1`, you can load the full dataset in a single batch.`tfds.load` will return a `dict` (`tuple` with `as_supervised=True`) of `tf.Tensor` (`np.array` with `tfds.as_numpy`).Be careful that your dataset can fit in memory, and that all examples have the same shape.
###Code
image, label = tfds.as_numpy(tfds.load(
'mnist',
split='test',
batch_size=-1,
as_supervised=True,
))
print(type(image), image.shape)
###Output
_____no_output_____
###Markdown
Build end-to-end pipelineTo go further, you can look:* Our [end-to-end Keras example](https://www.tensorflow.org/datasets/keras_example) to see a full training pipeline (with batching, shuffling,...).* Our [performance guide](https://www.tensorflow.org/datasets/performances) to improve the speed of your pipelines. Visualization tfds.as_dataframe`tf.data.Dataset` objects can be converted to [`pandas.DataFrame`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) with `tfds.as_dataframe` to be visualized on [Colab](https://colab.research.google.com).* Add the `tfds.core.DatasetInfo` as second argument of `tfds.as_dataframe` to visualize images, audio, texts, videos,...* Use `ds.take(x)` to only display the first `x` examples. `pandas.DataFrame` will load the full dataset in-memory, and can be very expensive to display.
###Code
ds, info = tfds.load('mnist', split='train', with_info=True)
tfds.as_dataframe(ds.take(4), info)
###Output
_____no_output_____
###Markdown
tfds.show_examplesFor images, use `tfds.show_examples` (only image datasets are supported for now):
###Code
ds, info = tfds.load('mnist', split='train', with_info=True)
fig = tfds.show_examples(ds, info)
###Output
_____no_output_____
###Markdown
Access the dataset metadataAll builders include a `tfds.core.DatasetInfo` object containing the dataset metadata.It can be accessed through:* The `tfds.load` API:
###Code
ds, info = tfds.load('mnist', with_info=True)
###Output
_____no_output_____
###Markdown
* The `tfds.core.DatasetBuilder` API:
###Code
builder = tfds.builder('mnist')
info = builder.info
###Output
_____no_output_____
###Markdown
The dataset info contains additional information about the dataset (version, citation, homepage, description,...).
###Code
print(info)
###Output
_____no_output_____
###Markdown
Features metadata (label names, image shape,...)Access the `tfds.features.FeatureDict`:
###Code
info.features
###Output
_____no_output_____
###Markdown
Number of classes, label names:
###Code
print(info.features["label"].num_classes)
print(info.features["label"].names)
print(info.features["label"].int2str(7)) # Human readable version (8 -> 'cat')
print(info.features["label"].str2int('7'))
###Output
_____no_output_____
###Markdown
Shapes, dtypes:
###Code
print(info.features.shape)
print(info.features.dtype)
print(info.features['image'].shape)
print(info.features['image'].dtype)
###Output
_____no_output_____
###Markdown
Split metadata (e.g. split names, number of examples,...)Access the `tfds.core.SplitDict`:
###Code
print(info.splits)
###Output
_____no_output_____
###Markdown
Available splits:
###Code
print(list(info.splits.keys()))
###Output
_____no_output_____
###Markdown
Get info on individual split:
###Code
print(info.splits['train'].num_examples)
print(info.splits['train'].filenames)
print(info.splits['train'].num_shards)
###Output
_____no_output_____
###Markdown
It also works with the subsplit API:
###Code
print(info.splits['train[15%:75%]'].num_examples)
print(info.splits['train[15%:75%]'].file_instructions)
###Output
_____no_output_____
###Markdown
TensorFlow DatasetsTFDS provides a collection of ready-to-use datasets for use with TensorFlow, Jax, and other Machine Learning frameworks.It handles downloading and preparing the data deterministically and constructing a `tf.data.Dataset` (or `np.array`).Note: Do not confuse [TFDS](https://www.tensorflow.org/datasets) (this library) with `tf.data` (TensorFlow API to build efficient data pipelines). TFDS is a high level wrapper around `tf.data`. If you're not familiar with this API, we encourage you to read [the official tf.data guide](https://www.tensorflow.org/guide/datasets) first. Copyright 2018 The TensorFlow Datasets Authors, Licensed under the Apache License, Version 2.0 View on TensorFlow.org Run in Google Colab View source on GitHub InstallationTFDS exists in two packages:* `pip install tensorflow-datasets`: The stable version, released every few months.* `pip install tfds-nightly`: Released every day, contains the last versions of the datasets.This colab uses `tfds-nightly`:
###Code
!pip install -q tfds-nightly tensorflow matplotlib
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
###Output
_____no_output_____
###Markdown
Find available datasetsAll dataset builders are subclasses of `tfds.core.DatasetBuilder`. To get the list of available builders, use `tfds.list_builders()` or look at our [catalog](https://www.tensorflow.org/datasets/catalog/overview).
###Code
tfds.list_builders()
###Output
_____no_output_____
###Markdown
Load a dataset tfds.loadThe easiest way of loading a dataset is `tfds.load`. It will:1. Download the data and save it as [`tfrecord`](https://www.tensorflow.org/tutorials/load_data/tfrecord) files.2. Load the `tfrecord` and create the `tf.data.Dataset`.
###Code
ds = tfds.load('mnist', split='train', shuffle_files=True)
assert isinstance(ds, tf.data.Dataset)
print(ds)
###Output
_____no_output_____
###Markdown
Some common arguments:* `split=`: Which split to read (e.g. `'train'`, `['train', 'test']`, `'train[80%:]'`,...). See our [split API guide](https://www.tensorflow.org/datasets/splits).* `shuffle_files=`: Control whether to shuffle the files between each epoch (TFDS stores big datasets in multiple smaller files).* `data_dir=`: Location where the dataset is saved (defaults to `~/tensorflow_datasets/`)* `with_info=True`: Returns the `tfds.core.DatasetInfo` containing dataset metadata* `download=False`: Disable download tfds.builder`tfds.load` is a thin wrapper around `tfds.core.DatasetBuilder`. You can get the same output using the `tfds.core.DatasetBuilder` API:
###Code
builder = tfds.builder('mnist')
# 1. Create the tfrecord files (no-op if already exists)
builder.download_and_prepare()
# 2. Load the `tf.data.Dataset`
ds = builder.as_dataset(split='train', shuffle_files=True)
print(ds)
###Output
_____no_output_____
###Markdown
Manual download (if download fails)If the download fails for some reason (e.g. you are offline,...), you can always manually download the data yourself and place it in the `manual_dir` (defaults to `~/tensorflow_datasets/download/manual/`).To find out which URLs to download, look into: * For new datasets (implemented as folder): [`tensorflow_datasets/`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/)`//checksums.tsv`. For example: [`tensorflow_datasets/text/bool_q/checksums.tsv`](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/text/bool_q/checksums.tsv) * For old datasets: [`tensorflow_datasets/url_checksums/.txt`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/url_checksums) A sketch of pointing TFDS at `manual_dir` follows the next code cell. Iterate over a dataset As dictBy default, the `tf.data.Dataset` object contains a `dict` of `tf.Tensor`s:
###Code
ds = tfds.load('mnist', split='train')
ds = ds.take(1) # Only take a single example
for example in ds: # example is `{'image': tf.Tensor, 'label': tf.Tensor}`
print(list(example.keys()))
image = example["image"]
label = example["label"]
print(image.shape, label)
###Output
_____no_output_____
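###Markdown
As referenced in the manual-download note above, here is a hedged sketch of pointing TFDS at a `manual_dir` after placing the downloaded archives there yourself (the dataset name and path are only examples):
```python
import tensorflow_datasets as tfds

builder = tfds.builder('bool_q')
builder.download_and_prepare(
    # Tell TFDS where the manually downloaded files live.
    download_config=tfds.download.DownloadConfig(
        manual_dir='~/tensorflow_datasets/download/manual/',
    )
)
ds = builder.as_dataset(split='train')
```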
###Markdown
As tuple (`as_supervised=True`)By using `as_supervised=True`, you can get a tuple `(features, label)` instead for supervised datasets.
###Code
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in ds: # example is (image, label)
print(image.shape, label)
###Output
_____no_output_____
###Markdown
As numpy (`tfds.as_numpy`)Use `tfds.as_numpy` to convert:* `tf.Tensor` -> `np.array`* `tf.data.Dataset` -> `Generator[np.array]`
###Code
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in tfds.as_numpy(ds):
print(type(image), type(label), label)
###Output
_____no_output_____
###Markdown
As batched tf.Tensor (`batch_size=-1`)By using `batch_size=-1`, you can load the full dataset in a single batch.`tfds.load` will return a `dict` (`tuple` with `as_supervised=True`) of `tf.Tensor` (`np.array` with `tfds.as_numpy`).Be careful that your dataset can fit in memory, and that all examples have the same shape.
###Code
image, label = tfds.as_numpy(tfds.load(
'mnist',
split='test',
batch_size=-1,
as_supervised=True,
))
print(type(image), image.shape)
###Output
_____no_output_____
###Markdown
Build end-to-end pipelineTo go further, you can look:* Our [end-to-end Keras example](https://www.tensorflow.org/datasets/keras_example) to see a full training pipeline (with batching, shuffling,...).* Our [performance guide](https://www.tensorflow.org/datasets/performances) to improve the speed of your pipelines. Visualization tfds.as_dataframe`tf.data.Dataset` objects can be converted to [`pandas.DataFrame`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) with `tfds.as_dataframe` to be visualized on [Colab](https://colab.research.google.com).* Add the `tfds.core.DatasetInfo` as second argument of `tfds.as_dataframe` to visualize images, audio, texts, videos,...* Use `ds.take(x)` to only display the first `x` examples. `pandas.DataFrame` will load the full dataset in-memory, and can be very expensive to display.
###Code
ds, info = tfds.load('mnist', split='train', with_info=True)
tfds.as_dataframe(ds.take(4), info)
###Output
_____no_output_____
###Markdown
tfds.show_examplesFor images, use `tfds.show_examples` (only image datasets are supported for now):
###Code
ds, info = tfds.load('mnist', split='train', with_info=True)
fig = tfds.show_examples(ds, info)
###Output
_____no_output_____
###Markdown
Access the dataset metadataAll builders include a `tfds.core.DatasetInfo` object containing the dataset metadata.It can be accessed through:* The `tfds.load` API:
###Code
ds, info = tfds.load('mnist', with_info=True)
###Output
_____no_output_____
###Markdown
* The `tfds.core.DatasetBuilder` API:
###Code
builder = tfds.builder('mnist')
info = builder.info
###Output
_____no_output_____
###Markdown
The dataset info contains additional information about the dataset (version, citation, homepage, description,...).
###Code
print(info)
###Output
_____no_output_____
###Markdown
Features metadata (label names, image shape,...)Access the `tfds.features.FeatureDict`:
###Code
info.features
###Output
_____no_output_____
###Markdown
Number of classes, label names:
###Code
print(info.features["label"].num_classes)
print(info.features["label"].names)
print(info.features["label"].int2str(7)) # Human readable version (8 -> 'cat')
print(info.features["label"].str2int('7'))
###Output
_____no_output_____
###Markdown
Shapes, dtypes:
###Code
print(info.features.shape)
print(info.features.dtype)
print(info.features['image'].shape)
print(info.features['image'].dtype)
###Output
_____no_output_____
###Markdown
Split metadata (e.g. split names, number of examples,...)Access the `tfds.core.SplitDict`:
###Code
print(info.splits)
###Output
_____no_output_____
###Markdown
Available splits:
###Code
print(list(info.splits.keys()))
###Output
_____no_output_____
###Markdown
Get info on individual split:
###Code
print(info.splits['train'].num_examples)
print(info.splits['train'].filenames)
print(info.splits['train'].num_shards)
###Output
_____no_output_____
###Markdown
It also works with the subsplit API:
###Code
print(info.splits['train[15%:75%]'].num_examples)
print(info.splits['train[15%:75%]'].file_instructions)
###Output
_____no_output_____
###Markdown
TensorFlow DatasetsTFDS provides a collection of ready-to-use datasets for use with TensorFlow, Jax, and other Machine Learning frameworks.It handles downloading and preparing the data deterministically and constructing a `tf.data.Dataset` (or `np.array`).Note: Do not confuse [TFDS](https://www.tensorflow.org/datasets) (this library) with `tf.data` (TensorFlow API to build efficient data pipelines). TFDS is a high level wrapper around `tf.data`. If you're not familiar with this API, we encourage you to read [the official tf.data guide](https://www.tensorflow.org/guide/datasets) first. Copyright 2018 The TensorFlow Datasets Authors, Licensed under the Apache License, Version 2.0 View on TensorFlow.org Run in Google Colab View source on GitHub InstallationTFDS exists in two packages:* `pip install tensorflow-datasets`: The stable version, released every few months.* `pip install tfds-nightly`: Released every day, contains the last versions of the datasets.This colab uses `tfds-nightly`:
###Code
!pip install -q tfds-nightly matplotlib
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
###Output
_____no_output_____
###Markdown
Find available datasetsAll dataset builders are subclasses of `tfds.core.DatasetBuilder`. To get the list of available builders, use `tfds.list_builders()` or look at our [catalog](https://www.tensorflow.org/datasets/catalog/overview).
###Code
tfds.list_builders()
###Output
_____no_output_____
###Markdown
Load a datasetThe easiest way of loading a dataset is `tfds.load`. It will:1. Download the data and save it as [`tfrecord`](https://www.tensorflow.org/tutorials/load_data/tfrecord) files.2. Load the `tfrecord` and create the `tf.data.Dataset`.
###Code
ds = tfds.load('mnist', split='train', shuffle_files=True)
assert isinstance(ds, tf.data.Dataset)
print(ds)
###Output
_____no_output_____
###Markdown
Some common arguments:* `split=`: Which split to read (e.g. `'train'`, `['train', 'test']`, `'train[80%:]'`,...). See our [split API guide](https://www.tensorflow.org/datasets/splits).* `shuffle_files=`: Control whether to shuffle the files between each epoch (TFDS stores big datasets in multiple smaller files).* `data_dir=`: Location where the dataset is saved (defaults to `~/tensorflow_datasets/`)* `with_info=True`: Returns the `tfds.core.DatasetInfo` containing dataset metadata* `download=False`: Disable download `tfds.load` is a thin wrapper around `tfds.core.DatasetBuilder`. You can get the same output using the `tfds.core.DatasetBuilder` API:
###Code
builder = tfds.builder('mnist')
# 1. Create the tfrecord files (no-op if already exists)
builder.download_and_prepare()
# 2. Load the `tf.data.Dataset`
ds = builder.as_dataset(split='train', shuffle_files=True)
print(ds)
###Output
_____no_output_____
###Markdown
Iterate over a dataset As dictBy default, the `tf.data.Dataset` object contains a `dict` of `tf.Tensor`s:
###Code
ds = tfds.load('mnist', split='train')
ds = ds.take(1) # Only take a single example
for example in ds: # example is `{'image': tf.Tensor, 'label': tf.Tensor}`
print(list(example.keys()))
image = example["image"]
label = example["label"]
print(image.shape, label)
###Output
_____no_output_____
###Markdown
As tuple (`as_supervised=True`)By using `as_supervised=True`, you can get a tuple `(features, label)` instead for supervised datasets.
###Code
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in ds: # example is (image, label)
print(image.shape, label)
###Output
_____no_output_____
###Markdown
As numpy (`tfds.as_numpy`)Use `tfds.as_numpy` to convert:* `tf.Tensor` -> `np.array`* `tf.data.Dataset` -> `Generator[np.array]`
###Code
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in tfds.as_numpy(ds):
print(type(image), type(label), label)
###Output
_____no_output_____
###Markdown
As batched tf.Tensor (`batch_size=-1`)By using `batch_size=-1`, you can load the full dataset in a single batch.`tfds.load` will return a `dict` (`tuple` with `as_supervised=True`) of `tf.Tensor` (`np.array` with `tfds.as_numpy`).Be careful that your dataset can fit in memory, and that all examples have the same shape.
###Code
image, label = tfds.as_numpy(tfds.load(
'mnist',
split='test',
batch_size=-1,
as_supervised=True,
))
print(type(image), image.shape)
###Output
_____no_output_____
###Markdown
Build end-to-end pipelineTo go further, you can look at:* Our [end-to-end Keras example](https://www.tensorflow.org/datasets/keras_example) to see a full training pipeline (with batching, shuffling,...).* Our [performance guide](https://www.tensorflow.org/datasets/performances) to improve the speed of your pipelines. Visualization tfds.as_dataframe`tf.data.Dataset` objects can be converted to [`pandas.DataFrame`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) with `tfds.as_dataframe` to be visualized on [Colab](https://colab.research.google.com).* Add the `tfds.core.DatasetInfo` as second argument of `tfds.as_dataframe` to visualize images, audio, texts, videos,...* Use `ds.take(x)` to only display the first `x` examples. `pandas.DataFrame` will load the full dataset in-memory, and can be very expensive to display.
###Code
ds, info = tfds.load('mnist', split='train', with_info=True)
tfds.as_dataframe(ds.take(4), info)
###Output
_____no_output_____
###Markdown
tfds.show_examplesFor images, use `tfds.show_examples` (only image datasets are supported for now):
###Code
ds, info = tfds.load('mnist', split='train', with_info=True)
fig = tfds.show_examples(ds, info)
###Output
_____no_output_____
###Markdown
Access the dataset metadataAll builders include a `tfds.core.DatasetInfo` object containing the dataset metadata.It can be accessed through:* The `tfds.load` API:
###Code
ds, info = tfds.load('mnist', with_info=True)
###Output
_____no_output_____
###Markdown
* The `tfds.core.DatasetBuilder` API:
###Code
builder = tfds.builder('mnist')
info = builder.info
###Output
_____no_output_____
###Markdown
The dataset info contains additional information about the dataset (version, citation, homepage, description,...).
###Code
print(info)
###Output
_____no_output_____
###Markdown
Features metadata (label names, image shape,...)Access the `tfds.features.FeatureDict`:
###Code
info.features
###Output
_____no_output_____
###Markdown
Number of classes, label names:
###Code
print(info.features["label"].num_classes)
print(info.features["label"].names)
print(info.features["label"].int2str(7)) # Human readable version (8 -> 'cat')
print(info.features["label"].str2int('7'))
###Output
_____no_output_____
###Markdown
Shapes, dtypes:
###Code
print(info.features.shape)
print(info.features.dtype)
print(info.features['image'].shape)
print(info.features['image'].dtype)
###Output
_____no_output_____
###Markdown
Split metadata (e.g. split names, number of examples,...)Access the `tfds.core.SplitDict`:
###Code
print(info.splits)
###Output
_____no_output_____
###Markdown
Available splits:
###Code
print(list(info.splits.keys()))
###Output
_____no_output_____
###Markdown
Get info on individual split:
###Code
print(info.splits['train'].num_examples)
print(info.splits['train'].filenames)
print(info.splits['train'].num_shards)
###Output
_____no_output_____
###Markdown
It also works with the subsplit API:
###Code
print(info.splits['train[15%:75%]'].num_examples)
print(info.splits['train[15%:75%]'].file_instructions)
###Output
_____no_output_____
###Markdown
TensorFlow DatasetsTFDS provides a collection of ready-to-use datasets for use with TensorFlow, Jax, and other Machine Learning frameworks.It handles downloading and preparing the data deterministically and constructing a `tf.data.Dataset` (or `np.array`).Note: Do not confuse [TFDS](https://www.tensorflow.org/datasets) (this library) with `tf.data` (TensorFlow API to build efficient data pipelines). TFDS is a high level wrapper around `tf.data`. If you're not familiar with this API, we encourage you to read [the official tf.data guide](https://www.tensorflow.org/guide/datasets) first. Copyright 2018 The TensorFlow Datasets Authors, Licensed under the Apache License, Version 2.0 View on TensorFlow.org Run in Google Colab View source on GitHub InstallationTFDS exists in two packages:* `pip install tensorflow-datasets`: The stable version, released every few months.* `pip install tfds-nightly`: Released every day, contains the last versions of the datasets.This colab uses `tfds-nightly`:
###Code
!pip install -q tfds-nightly tensorflow matplotlib
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
###Output
_____no_output_____
###Markdown
Find available datasetsAll dataset builders are subclasses of `tfds.core.DatasetBuilder`. To get the list of available builders, use `tfds.list_builders()` or look at our [catalog](https://www.tensorflow.org/datasets/catalog/overview).
###Code
tfds.list_builders()
###Output
_____no_output_____
###Markdown
Load a datasetThe easiest way of loading a dataset is `tfds.load`. It will:1. Download the data and save it as [`tfrecord`](https://www.tensorflow.org/tutorials/load_data/tfrecord) files.2. Load the `tfrecord` and create the `tf.data.Dataset`.
###Code
ds = tfds.load('mnist', split='train', shuffle_files=True)
assert isinstance(ds, tf.data.Dataset)
print(ds)
###Output
_____no_output_____
###Markdown
Some common arguments:* `split=`: Which split to read (e.g. `'train'`, `['train', 'test']`, `'train[80%:]'`,...). See our [split API guide](https://www.tensorflow.org/datasets/splits).* `shuffle_files=`: Control whether to shuffle the files between each epoch (TFDS stores big datasets in multiple smaller files).* `data_dir=`: Location where the dataset is saved (defaults to `~/tensorflow_datasets/`)* `with_info=True`: Returns the `tfds.core.DatasetInfo` containing dataset metadata* `download=False`: Disable download `tfds.load` is a thin wrapper around `tfds.core.DatasetBuilder`. You can get the same output using the `tfds.core.DatasetBuilder` API:
###Code
builder = tfds.builder('mnist')
# 1. Create the tfrecord files (no-op if already exists)
builder.download_and_prepare()
# 2. Load the `tf.data.Dataset`
ds = builder.as_dataset(split='train', shuffle_files=True)
print(ds)
###Output
_____no_output_____
###Markdown
Iterate over a dataset As dictBy default, the `tf.data.Dataset` object contains a `dict` of `tf.Tensor`s:
###Code
ds = tfds.load('mnist', split='train')
ds = ds.take(1) # Only take a single example
for example in ds: # example is `{'image': tf.Tensor, 'label': tf.Tensor}`
print(list(example.keys()))
image = example["image"]
label = example["label"]
print(image.shape, label)
###Output
_____no_output_____
###Markdown
As tuple (`as_supervised=True`)By using `as_supervised=True`, you can get a tuple `(features, label)` instead for supervised datasets.
###Code
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in ds: # example is (image, label)
print(image.shape, label)
###Output
_____no_output_____
###Markdown
As numpy (`tfds.as_numpy`)Use `tfds.as_numpy` to convert:* `tf.Tensor` -> `np.array`* `tf.data.Dataset` -> `Generator[np.array]`
###Code
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in tfds.as_numpy(ds):
print(type(image), type(label), label)
###Output
_____no_output_____
###Markdown
As batched tf.Tensor (`batch_size=-1`)By using `batch_size=-1`, you can load the full dataset in a single batch.`tfds.load` will return a `dict` (`tuple` with `as_supervised=True`) of `tf.Tensor` (`np.array` with `tfds.as_numpy`).Be careful that your dataset can fit in memory, and that all examples have the same shape.
###Code
image, label = tfds.as_numpy(tfds.load(
'mnist',
split='test',
batch_size=-1,
as_supervised=True,
))
print(type(image), image.shape)
###Output
_____no_output_____
###Markdown
Build end-to-end pipelineTo go further, you can look at:* Our [end-to-end Keras example](https://www.tensorflow.org/datasets/keras_example) to see a full training pipeline (with batching, shuffling,...).* Our [performance guide](https://www.tensorflow.org/datasets/performances) to improve the speed of your pipelines. Visualization tfds.as_dataframe`tf.data.Dataset` objects can be converted to [`pandas.DataFrame`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) with `tfds.as_dataframe` to be visualized on [Colab](https://colab.research.google.com).* Add the `tfds.core.DatasetInfo` as second argument of `tfds.as_dataframe` to visualize images, audio, texts, videos,...* Use `ds.take(x)` to only display the first `x` examples. `pandas.DataFrame` will load the full dataset in-memory, and can be very expensive to display.
###Code
ds, info = tfds.load('mnist', split='train', with_info=True)
tfds.as_dataframe(ds.take(4), info)
###Output
_____no_output_____
###Markdown
tfds.show_examplesFor images, use `tfds.show_examples` (only image datasets are supported for now):
###Code
ds, info = tfds.load('mnist', split='train', with_info=True)
fig = tfds.show_examples(ds, info)
###Output
_____no_output_____
###Markdown
Access the dataset metadataAll builders include a `tfds.core.DatasetInfo` object containing the dataset metadata.It can be accessed through:* The `tfds.load` API:
###Code
ds, info = tfds.load('mnist', with_info=True)
###Output
_____no_output_____
###Markdown
* The `tfds.core.DatasetBuilder` API:
###Code
builder = tfds.builder('mnist')
info = builder.info
###Output
_____no_output_____
###Markdown
The dataset info contains additional information about the dataset (version, citation, homepage, description,...).
###Code
print(info)
###Output
_____no_output_____
###Markdown
Features metadata (label names, image shape,...)Access the `tfds.features.FeatureDict`:
###Code
info.features
###Output
_____no_output_____
###Markdown
Number of classes, label names:
###Code
print(info.features["label"].num_classes)
print(info.features["label"].names)
print(info.features["label"].int2str(7)) # Human readable version (8 -> 'cat')
print(info.features["label"].str2int('7'))
###Output
_____no_output_____
###Markdown
Shapes, dtypes:
###Code
print(info.features.shape)
print(info.features.dtype)
print(info.features['image'].shape)
print(info.features['image'].dtype)
###Output
_____no_output_____
###Markdown
Split metadata (e.g. split names, number of examples,...)Access the `tfds.core.SplitDict`:
###Code
print(info.splits)
###Output
_____no_output_____
###Markdown
Available splits:
###Code
print(list(info.splits.keys()))
###Output
_____no_output_____
###Markdown
Get info on individual split:
###Code
print(info.splits['train'].num_examples)
print(info.splits['train'].filenames)
print(info.splits['train'].num_shards)
###Output
_____no_output_____
###Markdown
It also works with the subsplit API:
###Code
print(info.splits['train[15%:75%]'].num_examples)
print(info.splits['train[15%:75%]'].file_instructions)
###Output
_____no_output_____
###Markdown
TensorFlow DatasetsTFDS provides a collection of ready-to-use datasets for use with TensorFlow, Jax, and other Machine Learning frameworks.It handles downloading and preparing the data deterministically and constructing a `tf.data.Dataset` (or `np.array`).Note: Do not confuse [TFDS](https://www.tensorflow.org/datasets) (this library) with `tf.data` (TensorFlow API to build efficient data pipelines). TFDS is a high level wrapper around `tf.data`. If you're not familiar with this API, we encourage you to read [the official tf.data guide](https://www.tensorflow.org/guide/datasets) first. Copyright 2018 The TensorFlow Datasets Authors, Licensed under the Apache License, Version 2.0 View on TensorFlow.org Run in Google Colab View source on GitHub InstallationTFDS exists in two packages:* `tensorflow-datasets`: The stable version, released every few months.* `tfds-nightly`: Released every day, contains the last versions of the datasets.To install:
```
pip install tensorflow-datasets
```
Note: TFDS requires `tensorflow` (or `tensorflow-gpu`) to be already installed. TFDS supports TF >=1.15.This colab uses `tfds-nightly` and TF 2.
###Code
!pip install -q "tensorflow>=2" tfds-nightly matplotlib
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
###Output
_____no_output_____
###Markdown
Find available datasetsAll dataset builders are subclasses of `tfds.core.DatasetBuilder`. To get the list of available builders, use `tfds.list_builders()` or look at our [catalog](https://www.tensorflow.org/datasets/catalog/overview).
###Code
tfds.list_builders()
###Output
_____no_output_____
###Markdown
Load a datasetThe easiest way of loading a dataset is `tfds.load`. It will:1. Download the data and save it as [`tfrecord`](https://www.tensorflow.org/tutorials/load_data/tfrecord) files.2. Load the `tfrecord` and create the `tf.data.Dataset`.
###Code
ds = tfds.load('mnist', split='train', shuffle_files=True)
assert isinstance(ds, tf.data.Dataset)
print(ds)
###Output
_____no_output_____
###Markdown
Some common arguments:* `split=`: Which split to read (e.g. `'train'`, `['train', 'test']`, `'train[80%:]'`,...). See our [split API guide](https://www.tensorflow.org/datasets/splits).* `shuffle_files=`: Control whether to shuffle the files between each epoch (TFDS stores big datasets in multiple smaller files).* `data_dir=`: Location where the dataset is saved (defaults to `~/tensorflow_datasets/`)* `with_info=True`: Returns the `tfds.core.DatasetInfo` containing dataset metadata* `download=False`: Disable download `tfds.load` is a thin wrapper around `tfds.core.DatasetBuilder`. You can get the same output using the `tfds.core.DatasetBuilder` API:
###Code
builder = tfds.builder('mnist')
# 1. Create the tfrecord files (no-op if already exists)
builder.download_and_prepare()
# 2. Load the `tf.data.Dataset`
ds = builder.as_dataset(split='train', shuffle_files=True)
print(ds)
###Output
_____no_output_____
###Markdown
Iterate over a dataset As dictBy default, the `tf.data.Dataset` object contains a `dict` of `tf.Tensor`s:
###Code
ds = tfds.load('mnist', split='train')
ds = ds.take(1) # Only take a single example
for example in ds: # example is `{'image': tf.Tensor, 'label': tf.Tensor}`
print(list(example.keys()))
image = example["image"]
label = example["label"]
print(image.shape, label)
###Output
_____no_output_____
###Markdown
As tupleBy using `as_supervised=True`, you can get a tuple `(features, label)` instead for supervised datasets.
###Code
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in ds: # example is (image, label)
print(image.shape, label)
###Output
_____no_output_____
###Markdown
As numpyUse `tfds.as_numpy` to convert:* `tf.Tensor` -> `np.array`* `tf.data.Dataset` -> `Generator[np.array]`
###Code
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in tfds.as_numpy(ds):
print(type(image), type(label), label)
###Output
_____no_output_____
###Markdown
As batched tf.TensorBy using `batch_size=-1`, you can load the full dataset in a single batch.`tfds.load` will return a `dict` (`tuple` with `as_supervised=True`) of `tf.Tensor` (`np.array` with `tfds.as_numpy`).Be careful that your dataset can fit in memory, and that all examples have the same shape.
###Code
image, label = tfds.as_numpy(tfds.load(
'mnist',
split='test',
batch_size=-1,
as_supervised=True,
))
print(type(image), image.shape)
###Output
_____no_output_____
###Markdown
Build end-to-end pipelineTo go further, you can look:* Our [end-to-end Keras example](https://www.tensorflow.org/datasets/keras_example) to see a full training pipeline (with batching, shuffling,...).* Our [performance guide](https://www.tensorflow.org/datasets/performances) to improve the speed of your pipelines. Visualize a datasetVisualize datasets with `tfds.show_examples` (only image datasets supported now):
###Code
ds, info = tfds.load('mnist', split='train', with_info=True)
fig = tfds.show_examples(ds, info)
###Output
_____no_output_____
###Markdown
Access the dataset metadataAll builders include a `tfds.core.DatasetInfo` object containing the dataset metadata.It can be accessed through:* The `tfds.load` API:
###Code
ds, info = tfds.load('mnist', with_info=True)
###Output
_____no_output_____
###Markdown
* The `tfds.core.DatasetBuilder` API:
###Code
builder = tfds.builder('mnist')
info = builder.info
###Output
_____no_output_____
###Markdown
The dataset info contains additional information about the dataset (version, citation, homepage, description,...).
###Code
print(info)
###Output
_____no_output_____
###Markdown
Features metadata (label names, image shape,...)Access the `tfds.features.FeatureDict`:
###Code
info.features
###Output
_____no_output_____
###Markdown
Number of classes, label names:
###Code
print(info.features["label"].num_classes)
print(info.features["label"].names)
print(info.features["label"].int2str(7)) # Human readable version (8 -> 'cat')
print(info.features["label"].str2int('7'))
###Output
_____no_output_____
###Markdown
Shapes, dtypes:
###Code
print(info.features.shape)
print(info.features.dtype)
print(info.features['image'].shape)
print(info.features['image'].dtype)
###Output
_____no_output_____
###Markdown
Split metadata (e.g. split names, number of examples,...)Access the `tfds.core.SplitDict`:
###Code
print(info.splits)
###Output
_____no_output_____
###Markdown
Available splits:
###Code
print(list(info.splits.keys()))
###Output
_____no_output_____
###Markdown
Get info on individual split:
###Code
print(info.splits['train'].num_examples)
print(info.splits['train'].filenames)
print(info.splits['train'].num_shards)
###Output
_____no_output_____
###Markdown
It also works with the subsplit API:
###Code
print(info.splits['train[15%:75%]'].num_examples)
print(info.splits['train[15%:75%]'].file_instructions)
###Output
_____no_output_____
###Markdown
TensorFlow DatasetsTFDS provides a collection of ready-to-use datasets for use with TensorFlow, Jax, and other Machine Learning frameworks.It handles downloading and preparing the data deterministically and constructing a `tf.data.Dataset` (or `np.array`).Note: Do not confuse [TFDS](https://www.tensorflow.org/datasets) (this library) with `tf.data` (TensorFlow API to build efficient data pipelines). TFDS is a high level wrapper around `tf.data`. If you're not familiar with this API, we encourage you to read [the official tf.data guide](https://www.tensorflow.org/guide/datasets) first. Copyright 2018 The TensorFlow Datasets Authors, Licensed under the Apache License, Version 2.0 View on TensorFlow.org Run in Google Colab View source on GitHub InstallationTFDS exists in two packages:* `pip install tensorflow-datasets`: The stable version, released every few months.* `pip install tfds-nightly`: Released every day, contains the last versions of the datasets.This colab uses `tfds-nightly`:
###Code
!pip install -q tfds-nightly tensorflow matplotlib
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
###Output
_____no_output_____
###Markdown
Find available datasetsAll dataset builders are subclasses of `tfds.core.DatasetBuilder`. To get the list of available builders, use `tfds.list_builders()` or look at our [catalog](https://www.tensorflow.org/datasets/catalog/overview).
###Code
tfds.list_builders()
###Output
_____no_output_____
###Markdown
Load a dataset tfds.loadThe easiest way of loading a dataset is `tfds.load`. It will:1. Download the data and save it as [`tfrecord`](https://www.tensorflow.org/tutorials/load_data/tfrecord) files.2. Load the `tfrecord` and create the `tf.data.Dataset`.
###Code
ds = tfds.load('mnist', split='train', shuffle_files=True)
assert isinstance(ds, tf.data.Dataset)
print(ds)
###Output
_____no_output_____
###Markdown
Some common arguments:* `split=`: Which split to read (e.g. `'train'`, `['train', 'test']`, `'train[80%:]'`,...). See our [split API guide](https://www.tensorflow.org/datasets/splits).* `shuffle_files=`: Control whether to shuffle the files between each epoch (TFDS stores big datasets in multiple smaller files).* `data_dir=`: Location where the dataset is saved (defaults to `~/tensorflow_datasets/`)* `with_info=True`: Returns the `tfds.core.DatasetInfo` containing dataset metadata* `download=False`: Disable download tfds.builder`tfds.load` is a thin wrapper around `tfds.core.DatasetBuilder`. You can get the same output using the `tfds.core.DatasetBuilder` API:
###Code
builder = tfds.builder('mnist')
# 1. Create the tfrecord files (no-op if already exists)
builder.download_and_prepare()
# 2. Load the `tf.data.Dataset`
ds = builder.as_dataset(split='train', shuffle_files=True)
print(ds)
###Output
_____no_output_____
###Markdown
Manual download (if download fails)If the download fails for some reason (e.g. you are offline,...), you can always manually download the data yourself and place it in the `manual_dir` (defaults to `~/tensorflow_datasets/download/manual/`).To find out which URLs to download, look into: * For new datasets (implemented as folder): [`tensorflow_datasets/`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/)`//checksums.tsv`. For example: [`tensorflow_datasets/text/bool_q/checksums.tsv`](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/text/bool_q/checksums.tsv) * For old datasets: [`tensorflow_datasets/url_checksums/.txt`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/url_checksums) `tfds build` CLIIf you want to generate a specific dataset, you can use the [`tfds` command line](https://www.tensorflow.org/datasets/cli). For example:
```sh
tfds build mnist
```
See [the doc](https://www.tensorflow.org/datasets/cli) for available flags. Iterate over a dataset As dictBy default, the `tf.data.Dataset` object contains a `dict` of `tf.Tensor`s:
###Code
ds = tfds.load('mnist', split='train')
ds = ds.take(1) # Only take a single example
for example in ds: # example is `{'image': tf.Tensor, 'label': tf.Tensor}`
print(list(example.keys()))
image = example["image"]
label = example["label"]
print(image.shape, label)
###Output
_____no_output_____
###Markdown
As tuple (`as_supervised=True`)By using `as_supervised=True`, you can get a tuple `(features, label)` instead for supervised datasets.
###Code
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in ds: # example is (image, label)
print(image.shape, label)
###Output
_____no_output_____
###Markdown
As numpy (`tfds.as_numpy`): Use `tfds.as_numpy` to convert:* `tf.Tensor` -> `np.array`* `tf.data.Dataset` -> `Generator[np.array]`
###Code
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in tfds.as_numpy(ds):
print(type(image), type(label), label)
###Output
_____no_output_____
###Markdown
As batched tf.Tensor (`batch_size=-1`)By using `batch_size=-1`, you can load the full dataset in a single batch.`tfds.load` will return a `dict` (`tuple` with `as_supervised=True`) of `tf.Tensor` (`np.array` with `tfds.as_numpy`).Be careful that your dataset can fit in memory, and that all examples have the same shape.
###Code
image, label = tfds.as_numpy(tfds.load(
'mnist',
split='test',
batch_size=-1,
as_supervised=True,
))
print(type(image), image.shape)
###Output
_____no_output_____
###Markdown
Build end-to-end pipelineTo go further, you can look:* Our [end-to-end Keras example](https://www.tensorflow.org/datasets/keras_example) to see a full training pipeline (with batching, shuffling,...).* Our [performance guide](https://www.tensorflow.org/datasets/performances) to improve the speed of your pipelines. Visualization tfds.as_dataframe`tf.data.Dataset` objects can be converted to [`pandas.DataFrame`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) with `tfds.as_dataframe` to be visualized on [Colab](https://colab.research.google.com).* Add the `tfds.core.DatasetInfo` as second argument of `tfds.as_dataframe` to visualize images, audio, texts, videos,...* Use `ds.take(x)` to only display the first `x` examples. `pandas.DataFrame` will load the full dataset in-memory, and can be very expensive to display.
###Code
ds, info = tfds.load('mnist', split='train', with_info=True)
tfds.as_dataframe(ds.take(4), info)
###Output
_____no_output_____
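###Markdown
Expanding on the end-to-end pipeline pointers above, here is a minimal training-pipeline sketch; the normalization function, buffer sizes, model architecture and epoch count below are illustrative assumptions, not part of the original notebook:
###Code
def normalize_img(image, label):
    # Rescale images from uint8 [0, 255] to float32 [0, 1].
    return tf.cast(image, tf.float32) / 255.0, label

# Note: tf.data.AUTOTUNE requires TF >= 2.4; on older versions use tf.data.experimental.AUTOTUNE.
ds_train = tfds.load('mnist', split='train', as_supervised=True, shuffle_files=True)
ds_train = (ds_train
            .map(normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
            .cache()                  # cache after the cheap map
            .shuffle(10_000)          # shuffle buffer size (assumption)
            .batch(128)
            .prefetch(tf.data.AUTOTUNE))

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'],
)
model.fit(ds_train, epochs=1)
###Output
_____no_output_____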
###Markdown
tfds.show_examples: For image datasets, use `tfds.show_examples` (only image datasets are supported for now):
###Code
ds, info = tfds.load('mnist', split='train', with_info=True)
fig = tfds.show_examples(ds, info)
###Output
_____no_output_____
###Markdown
Access the dataset metadataAll builders include a `tfds.core.DatasetInfo` object containing the dataset metadata.It can be accessed through:* The `tfds.load` API:
###Code
ds, info = tfds.load('mnist', with_info=True)
###Output
_____no_output_____
###Markdown
* The `tfds.core.DatasetBuilder` API:
###Code
builder = tfds.builder('mnist')
info = builder.info
###Output
_____no_output_____
###Markdown
The dataset info contains additional information about the dataset (version, citation, homepage, description,...).
###Code
print(info)
###Output
_____no_output_____
###Markdown
Features metadata (label names, image shape,...)Access the `tfds.features.FeatureDict`:
###Code
info.features
###Output
_____no_output_____
###Markdown
Number of classes, label names:
###Code
print(info.features["label"].num_classes)
print(info.features["label"].names)
print(info.features["label"].int2str(7)) # Human readable version (e.g. 7 -> '7' for MNIST)
print(info.features["label"].str2int('7'))
###Output
_____no_output_____
###Markdown
Shapes, dtypes:
###Code
print(info.features.shape)
print(info.features.dtype)
print(info.features['image'].shape)
print(info.features['image'].dtype)
###Output
_____no_output_____
###Markdown
Split metadata (e.g. split names, number of examples,...)Access the `tfds.core.SplitDict`:
###Code
print(info.splits)
###Output
_____no_output_____
###Markdown
Available splits:
###Code
print(list(info.splits.keys()))
###Output
_____no_output_____
###Markdown
Get info on individual split:
###Code
print(info.splits['train'].num_examples)
print(info.splits['train'].filenames)
print(info.splits['train'].num_shards)
###Output
_____no_output_____
###Markdown
It also works with the subsplit API:
###Code
print(info.splits['train[15%:75%]'].num_examples)
print(info.splits['train[15%:75%]'].file_instructions)
###Output
_____no_output_____
###Markdown
TensorFlow DatasetsTFDS provides a collection of ready-to-use datasets for use with TensorFlow, Jax, and other Machine Learning frameworks.It handles downloading and preparing the data deterministically and constructing a `tf.data.Dataset` (or `np.array`).Note: Do not confuse [TFDS](https://www.tensorflow.org/datasets) (this library) with `tf.data` (TensorFlow API to build efficient data pipelines). TFDS is a high level wrapper around `tf.data`. If you're not familiar with this API, we encourage you to read [the official tf.data guide](https://www.tensorflow.org/guide/datasets) first. Copyright 2018 The TensorFlow Datasets Authors, Licensed under the Apache License, Version 2.0 View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook InstallationTFDS exists in two packages:* `pip install tensorflow-datasets`: The stable version, released every few months.* `pip install tfds-nightly`: Released every day, contains the last versions of the datasets.This colab uses `tfds-nightly`:
###Code
!pip install -q tfds-nightly tensorflow matplotlib
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
###Output
_____no_output_____
###Markdown
Find available datasets: All dataset builders are subclasses of `tfds.core.DatasetBuilder`. To get the list of available builders, use `tfds.list_builders()` or look at our [catalog](https://www.tensorflow.org/datasets/catalog/overview).
###Code
tfds.list_builders()
###Output
_____no_output_____
###Markdown
Load a dataset tfds.loadThe easiest way of loading a dataset is `tfds.load`. It will:1. Download the data and save it as [`tfrecord`](https://www.tensorflow.org/tutorials/load_data/tfrecord) files.2. Load the `tfrecord` and create the `tf.data.Dataset`.
###Code
ds = tfds.load('mnist', split='train', shuffle_files=True)
assert isinstance(ds, tf.data.Dataset)
print(ds)
###Output
_____no_output_____
###Markdown
Some common arguments:* `split=`: Which split to read (e.g. `'train'`, `['train', 'test']`, `'train[80%:]'`,...). See our [split API guide](https://www.tensorflow.org/datasets/splits).* `shuffle_files=`: Control whether to shuffle the files between each epoch (TFDS store big datasets in multiple smaller files).* `data_dir=`: Location where the dataset is saved (defaults to `~/tensorflow_datasets/`)* `with_info=True`: Returns the `tfds.core.DatasetInfo` containing dataset metadata* `download=False`: Disable download tfds.builder`tfds.load` is a thin wrapper around `tfds.core.DatasetBuilder`. You can get the same output using the `tfds.core.DatasetBuilder` API:
###Code
builder = tfds.builder('mnist')
# 1. Create the tfrecord files (no-op if already exists)
builder.download_and_prepare()
# 2. Load the `tf.data.Dataset`
ds = builder.as_dataset(split='train', shuffle_files=True)
print(ds)
###Output
_____no_output_____
###Markdown
`tfds build` CLIIf you want to generate a specific dataset, you can use the [`tfds` command line](https://www.tensorflow.org/datasets/cli). For example:```shtfds build mnist```See [the doc](https://www.tensorflow.org/datasets/cli) for available flags. Iterate over a dataset As dictBy default, the `tf.data.Dataset` object contains a `dict` of `tf.Tensor`s:
###Code
ds = tfds.load('mnist', split='train')
ds = ds.take(1) # Only take a single example
for example in ds: # example is `{'image': tf.Tensor, 'label': tf.Tensor}`
print(list(example.keys()))
image = example["image"]
label = example["label"]
print(image.shape, label)
###Output
_____no_output_____
###Markdown
To find out the `dict` key names and structure, look at the dataset documentation in [our catalog](https://www.tensorflow.org/datasets/catalog/overview#all_datasets). For example: [mnist documentation](https://www.tensorflow.org/datasets/catalog/mnist). As tuple (`as_supervised=True`): By using `as_supervised=True`, you can get a tuple `(features, label)` instead (for supervised datasets).
###Code
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in ds: # example is (image, label)
print(image.shape, label)
###Output
_____no_output_____
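###Markdown
Whichever form you use, the element structure can also be inspected programmatically with the standard `tf.data` API; a small sketch (not part of the original notebook) using `element_spec`:
###Code
ds_dict = tfds.load('mnist', split='train')
print(ds_dict.element_spec)   # {'image': TensorSpec(...), 'label': TensorSpec(...)}

ds_tuple = tfds.load('mnist', split='train', as_supervised=True)
print(ds_tuple.element_spec)  # (TensorSpec(...), TensorSpec(...))
###Output
_____no_output_____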
###Markdown
As numpy (`tfds.as_numpy`): Use `tfds.as_numpy` to convert:* `tf.Tensor` -> `np.array`* `tf.data.Dataset` -> `Iterator[Tree[np.array]]` (`Tree` can be an arbitrarily nested `Dict` or `Tuple`)
###Code
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in tfds.as_numpy(ds):
print(type(image), type(label), label)
###Output
_____no_output_____
###Markdown
As batched tf.Tensor (`batch_size=-1`): By using `batch_size=-1`, you can load the full dataset in a single batch. This can be combined with `as_supervised=True` and `tfds.as_numpy` to get the data as `(np.array, np.array)`:
###Code
image, label = tfds.as_numpy(tfds.load(
'mnist',
split='test',
batch_size=-1,
as_supervised=True,
))
print(type(image), image.shape)
###Output
_____no_output_____
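###Markdown
The next note warns about memory; a quick sanity check of the footprint of the arrays just loaded (a small sketch using plain NumPy, not part of the original notebook):
###Code
# `image` and `label` are the np.array objects from the previous cell.
print("images: {:.1f} MB".format(image.nbytes / 1e6))
print("labels: {:.1f} MB".format(label.nbytes / 1e6))
###Output
_____no_output_____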
###Markdown
Be careful that your dataset can fit in memory, and that all examples have the same shape. Benchmark your datasetsBenchmarking a dataset is a simple `tfds.benchmark` call on any iterable (e.g. `tf.data.Dataset`, `tfds.as_numpy`,...).
###Code
ds = tfds.load('mnist', split='train')
ds = ds.batch(32).prefetch(1)
tfds.benchmark(ds, batch_size=32)
tfds.benchmark(ds, batch_size=32) # Second epoch much faster due to auto-caching
###Output
_____no_output_____
###Markdown
* Do not forget to normalize the results per batch size with the `batch_size=` kwarg.* In the summary, the first warmup batch is separated from the other ones to capture `tf.data.Dataset` extra setup time (e.g. buffers initialization,...).* Notice how the second iteration is much faster due to [TFDS auto-caching](https://www.tensorflow.org/datasets/performances#auto-caching).* `tfds.benchmark` returns a `tfds.core.BenchmarkResult` which can be inspected for further analysis. Build end-to-end pipeline: To go further, you can look at:* Our [end-to-end Keras example](https://www.tensorflow.org/datasets/keras_example) to see a full training pipeline (with batching, shuffling,...).* Our [performance guide](https://www.tensorflow.org/datasets/performances) to improve the speed of your pipelines (tip: use `tfds.benchmark(ds)` to benchmark your datasets). Visualization: tfds.as_dataframe: `tf.data.Dataset` objects can be converted to [`pandas.DataFrame`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) with `tfds.as_dataframe` to be visualized on [Colab](https://colab.research.google.com).* Add the `tfds.core.DatasetInfo` as second argument of `tfds.as_dataframe` to visualize images, audio, texts, videos,...* Use `ds.take(x)` to only display the first `x` examples. `pandas.DataFrame` will load the full dataset in-memory, and can be very expensive to display.
###Code
ds, info = tfds.load('mnist', split='train', with_info=True)
tfds.as_dataframe(ds.take(4), info)
###Output
_____no_output_____
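###Markdown
Since `tfds.as_dataframe` returns a regular `pandas.DataFrame` under the hood, standard pandas operations apply; a small sketch (not part of the original notebook) for a quick label count:
###Code
df = tfds.as_dataframe(ds.take(100))  # without DatasetInfo: keep raw values instead of rich display
print(df['label'].value_counts())
###Output
_____no_output_____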
###Markdown
tfds.show_examples`tfds.show_examples` returns a `matplotlib.figure.Figure` (only image datasets supported now):
###Code
ds, info = tfds.load('mnist', split='train', with_info=True)
fig = tfds.show_examples(ds, info)
###Output
_____no_output_____
###Markdown
Access the dataset metadataAll builders include a `tfds.core.DatasetInfo` object containing the dataset metadata.It can be accessed through:* The `tfds.load` API:
###Code
ds, info = tfds.load('mnist', with_info=True)
###Output
_____no_output_____
###Markdown
* The `tfds.core.DatasetBuilder` API:
###Code
builder = tfds.builder('mnist')
info = builder.info
###Output
_____no_output_____
###Markdown
The dataset info contains additional information about the dataset (version, citation, homepage, description,...).
###Code
print(info)
###Output
_____no_output_____
###Markdown
Features metadata (label names, image shape,...)Access the `tfds.features.FeatureDict`:
###Code
info.features
###Output
_____no_output_____
###Markdown
Number of classes, label names:
###Code
print(info.features["label"].num_classes)
print(info.features["label"].names)
print(info.features["label"].int2str(7)) # Human readable version (e.g. 7 -> '7' for MNIST)
print(info.features["label"].str2int('7'))
###Output
_____no_output_____
###Markdown
Shapes, dtypes:
###Code
print(info.features.shape)
print(info.features.dtype)
print(info.features['image'].shape)
print(info.features['image'].dtype)
###Output
_____no_output_____
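###Markdown
One way to use this metadata (an illustrative sketch, not part of the original notebook) is to define a model directly from the feature shape and the number of classes:
###Code
image_shape = info.features['image'].shape        # (28, 28, 1)
num_classes = info.features['label'].num_classes  # 10

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=image_shape),
    tf.keras.layers.Dense(num_classes),
])
model.summary()
###Output
_____no_output_____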
###Markdown
Split metadata (e.g. split names, number of examples,...)Access the `tfds.core.SplitDict`:
###Code
print(info.splits)
###Output
_____no_output_____
###Markdown
Available splits:
###Code
print(list(info.splits.keys()))
###Output
_____no_output_____
###Markdown
Get info on individual split:
###Code
print(info.splits['train'].num_examples)
print(info.splits['train'].filenames)
print(info.splits['train'].num_shards)
###Output
_____no_output_____
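###Markdown
A common use of this metadata (a small sketch; the batch size is an arbitrary assumption) is computing the number of training steps per epoch:
###Code
batch_size = 32  # illustrative value
steps_per_epoch = info.splits['train'].num_examples // batch_size
print(steps_per_epoch)
###Output
_____no_output_____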
###Markdown
It also works with the subsplit API:
###Code
print(info.splits['train[15%:75%]'].num_examples)
print(info.splits['train[15%:75%]'].file_instructions)
###Output
_____no_output_____ |
11_training_deep_neural_networks.ipynb | ###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6314 - accuracy: 0.5054 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.8416 - accuracy: 0.7247 - val_loss: 0.7130 - val_accuracy: 0.7656
Epoch 3/10
1719/1719 [==============================] - 2s 879us/step - loss: 0.7053 - accuracy: 0.7637 - val_loss: 0.6427 - val_accuracy: 0.7898
Epoch 4/10
1719/1719 [==============================] - 2s 883us/step - loss: 0.6325 - accuracy: 0.7908 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 2s 887us/step - loss: 0.5992 - accuracy: 0.8021 - val_loss: 0.5582 - val_accuracy: 0.8200
Epoch 6/10
1719/1719 [==============================] - 2s 881us/step - loss: 0.5624 - accuracy: 0.8142 - val_loss: 0.5350 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.5379 - accuracy: 0.8217 - val_loss: 0.5157 - val_accuracy: 0.8304
Epoch 8/10
1719/1719 [==============================] - 2s 895us/step - loss: 0.5152 - accuracy: 0.8295 - val_loss: 0.5078 - val_accuracy: 0.8284
Epoch 9/10
1719/1719 [==============================] - 2s 911us/step - loss: 0.5100 - accuracy: 0.8268 - val_loss: 0.4895 - val_accuracy: 0.8390
Epoch 10/10
1719/1719 [==============================] - 2s 897us/step - loss: 0.4918 - accuracy: 0.8340 - val_loss: 0.4817 - val_accuracy: 0.8396
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6969 - accuracy: 0.4974 - val_loss: 0.9255 - val_accuracy: 0.7186
Epoch 2/10
1719/1719 [==============================] - 2s 990us/step - loss: 0.8706 - accuracy: 0.7247 - val_loss: 0.7305 - val_accuracy: 0.7630
Epoch 3/10
1719/1719 [==============================] - 2s 980us/step - loss: 0.7211 - accuracy: 0.7621 - val_loss: 0.6564 - val_accuracy: 0.7882
Epoch 4/10
1719/1719 [==============================] - 2s 985us/step - loss: 0.6447 - accuracy: 0.7879 - val_loss: 0.6003 - val_accuracy: 0.8048
Epoch 5/10
1719/1719 [==============================] - 2s 967us/step - loss: 0.6077 - accuracy: 0.8004 - val_loss: 0.5656 - val_accuracy: 0.8182
Epoch 6/10
1719/1719 [==============================] - 2s 984us/step - loss: 0.5692 - accuracy: 0.8118 - val_loss: 0.5406 - val_accuracy: 0.8236
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5428 - accuracy: 0.8194 - val_loss: 0.5196 - val_accuracy: 0.8314
Epoch 8/10
1719/1719 [==============================] - 2s 983us/step - loss: 0.5193 - accuracy: 0.8284 - val_loss: 0.5113 - val_accuracy: 0.8316
Epoch 9/10
1719/1719 [==============================] - 2s 992us/step - loss: 0.5128 - accuracy: 0.8272 - val_loss: 0.4916 - val_accuracy: 0.8378
Epoch 10/10
1719/1719 [==============================] - 2s 988us/step - loss: 0.4941 - accuracy: 0.8314 - val_loss: 0.4826 - val_accuracy: 0.8398
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
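###Markdown
If you need dropout in a self-normalizing network, regular dropout would break the property discussed above; Keras provides `AlphaDropout` for SELU networks. A minimal sketch (the layer sizes and dropout rate are arbitrary assumptions):
###Code
selu_block = keras.models.Sequential([
    keras.layers.Dense(100, activation="selu",
                       kernel_initializer="lecun_normal"),
    keras.layers.AlphaDropout(rate=0.1),  # illustrative rate
    keras.layers.Dense(100, activation="selu",
                       kernel_initializer="lecun_normal"),
])
###Output
_____no_output_____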
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 12s 6ms/step - loss: 1.3556 - accuracy: 0.4808 - val_loss: 0.7711 - val_accuracy: 0.6858
Epoch 2/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7537 - accuracy: 0.7235 - val_loss: 0.7534 - val_accuracy: 0.7384
Epoch 3/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7451 - accuracy: 0.7357 - val_loss: 0.5943 - val_accuracy: 0.7834
Epoch 4/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5699 - accuracy: 0.7906 - val_loss: 0.5434 - val_accuracy: 0.8066
Epoch 5/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5569 - accuracy: 0.8051 - val_loss: 0.4907 - val_accuracy: 0.8218
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 11s 5ms/step - loss: 2.0460 - accuracy: 0.1919 - val_loss: 1.5971 - val_accuracy: 0.3048
Epoch 2/5
1719/1719 [==============================] - 8s 5ms/step - loss: 1.2654 - accuracy: 0.4591 - val_loss: 0.9156 - val_accuracy: 0.6372
Epoch 3/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.9312 - accuracy: 0.6169 - val_loss: 0.8928 - val_accuracy: 0.6246
Epoch 4/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.8188 - accuracy: 0.6710 - val_loss: 0.6914 - val_accuracy: 0.7396
Epoch 5/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.7288 - accuracy: 0.7152 - val_loss: 0.6638 - val_accuracy: 0.7380
###Markdown
Not great at all, we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
#bn1.updates #deprecated
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.2287 - accuracy: 0.5993 - val_loss: 0.5526 - val_accuracy: 0.8230
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5996 - accuracy: 0.7959 - val_loss: 0.4725 - val_accuracy: 0.8468
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5312 - accuracy: 0.8168 - val_loss: 0.4375 - val_accuracy: 0.8558
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4884 - accuracy: 0.8294 - val_loss: 0.4153 - val_accuracy: 0.8596
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4717 - accuracy: 0.8343 - val_loss: 0.3997 - val_accuracy: 0.8640
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4420 - accuracy: 0.8461 - val_loss: 0.3867 - val_accuracy: 0.8694
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4285 - accuracy: 0.8496 - val_loss: 0.3763 - val_accuracy: 0.8710
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4086 - accuracy: 0.8552 - val_loss: 0.3711 - val_accuracy: 0.8740
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4079 - accuracy: 0.8566 - val_loss: 0.3631 - val_accuracy: 0.8752
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3903 - accuracy: 0.8617 - val_loss: 0.3573 - val_accuracy: 0.8750
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer includes an offset parameter per input; keeping the bias terms as well would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.3677 - accuracy: 0.5604 - val_loss: 0.6767 - val_accuracy: 0.7812
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.7136 - accuracy: 0.7702 - val_loss: 0.5566 - val_accuracy: 0.8184
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.6123 - accuracy: 0.7990 - val_loss: 0.5007 - val_accuracy: 0.8360
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5547 - accuracy: 0.8148 - val_loss: 0.4666 - val_accuracy: 0.8448
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5255 - accuracy: 0.8230 - val_loss: 0.4434 - val_accuracy: 0.8534
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4947 - accuracy: 0.8328 - val_loss: 0.4263 - val_accuracy: 0.8550
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4736 - accuracy: 0.8385 - val_loss: 0.4130 - val_accuracy: 0.8566
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4550 - accuracy: 0.8446 - val_loss: 0.4035 - val_accuracy: 0.8612
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4495 - accuracy: 0.8440 - val_loss: 0.3943 - val_accuracy: 0.8638
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4333 - accuracy: 0.8494 - val_loss: 0.3875 - val_accuracy: 0.8660
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
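# clipvalue=1.0 clips every gradient component to [-1.0, 1.0], while clipnorm=1.0 rescales
# any gradient whose L2 norm exceeds 1.0. A minimal usage sketch (assuming `model` is the
# Keras model built above); clipping then happens automatically at every training step:
model.compile(loss="sparse_categorical_crossentropy",
              optimizer=optimizer, metrics=["accuracy"])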
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 1s 83ms/step - loss: 0.6155 - accuracy: 0.6184 - val_loss: 0.5843 - val_accuracy: 0.6329
Epoch 2/4
7/7 [==============================] - 0s 9ms/step - loss: 0.5550 - accuracy: 0.6638 - val_loss: 0.5467 - val_accuracy: 0.6805
Epoch 3/4
7/7 [==============================] - 0s 8ms/step - loss: 0.4897 - accuracy: 0.7482 - val_loss: 0.5146 - val_accuracy: 0.7089
Epoch 4/4
7/7 [==============================] - 0s 8ms/step - loss: 0.4899 - accuracy: 0.7405 - val_loss: 0.4859 - val_accuracy: 0.7323
Epoch 1/16
7/7 [==============================] - 0s 28ms/step - loss: 0.4380 - accuracy: 0.7774 - val_loss: 0.3460 - val_accuracy: 0.8661
Epoch 2/16
7/7 [==============================] - 0s 9ms/step - loss: 0.2971 - accuracy: 0.9143 - val_loss: 0.2603 - val_accuracy: 0.9310
Epoch 3/16
7/7 [==============================] - 0s 9ms/step - loss: 0.2034 - accuracy: 0.9777 - val_loss: 0.2110 - val_accuracy: 0.9554
Epoch 4/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1754 - accuracy: 0.9719 - val_loss: 0.1790 - val_accuracy: 0.9696
Epoch 5/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1348 - accuracy: 0.9809 - val_loss: 0.1561 - val_accuracy: 0.9757
Epoch 6/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1172 - accuracy: 0.9973 - val_loss: 0.1392 - val_accuracy: 0.9797
Epoch 7/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1137 - accuracy: 0.9931 - val_loss: 0.1266 - val_accuracy: 0.9838
Epoch 8/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1000 - accuracy: 0.9931 - val_loss: 0.1163 - val_accuracy: 0.9858
Epoch 9/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0834 - accuracy: 1.0000 - val_loss: 0.1065 - val_accuracy: 0.9888
Epoch 10/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0775 - accuracy: 1.0000 - val_loss: 0.0999 - val_accuracy: 0.9899
Epoch 11/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0689 - accuracy: 1.0000 - val_loss: 0.0939 - val_accuracy: 0.9899
Epoch 12/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0888 - val_accuracy: 0.9899
Epoch 13/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0565 - accuracy: 1.0000 - val_loss: 0.0839 - val_accuracy: 0.9899
Epoch 14/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0494 - accuracy: 1.0000 - val_loss: 0.0802 - val_accuracy: 0.9899
Epoch 15/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0544 - accuracy: 1.0000 - val_loss: 0.0768 - val_accuracy: 0.9899
Epoch 16/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0472 - accuracy: 1.0000 - val_loss: 0.0738 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 705us/step - loss: 0.0682 - accuracy: 0.9935
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4.5!
###Code
(100 - 97.05) / (100 - 99.35)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9)
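# With momentum=0.9, each update keeps about 90% of the previous update direction:
# velocity = 0.9 * velocity - learning_rate * gradient;  weights += velocity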
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(learning_rate=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
import math
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = math.ceil(len(X_train) / batch_size)
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
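# This two-argument variant plugs into the same callback; a sketch (assuming the same
# model/compile setup as above). Since it multiplies the optimizer's *current* learning
# rate, it also picks up where it left off if training is resumed:
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)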
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.learning_rate)
        K.set_value(self.model.optimizer.learning_rate, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.learning_rate)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(learning_rate=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
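# Note: inside piecewise_constant_fn, `boundaries > epoch` is False for every boundary the
# epoch has already passed, so np.argmax returns the index of the first boundary still ahead;
# subtracting 1 selects the matching value (and wraps to the last value once every boundary
# has been passed).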
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5995 - accuracy: 0.7923 - val_loss: 0.4095 - val_accuracy: 0.8606
Epoch 2/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3890 - accuracy: 0.8613 - val_loss: 0.3738 - val_accuracy: 0.8692
Epoch 3/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3530 - accuracy: 0.8772 - val_loss: 0.3735 - val_accuracy: 0.8692
Epoch 4/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3296 - accuracy: 0.8813 - val_loss: 0.3494 - val_accuracy: 0.8798
Epoch 5/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3178 - accuracy: 0.8867 - val_loss: 0.3430 - val_accuracy: 0.8794
Epoch 6/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2930 - accuracy: 0.8951 - val_loss: 0.3414 - val_accuracy: 0.8826
Epoch 7/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2854 - accuracy: 0.8985 - val_loss: 0.3354 - val_accuracy: 0.8810
Epoch 8/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9039 - val_loss: 0.3364 - val_accuracy: 0.8824
Epoch 9/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9047 - val_loss: 0.3265 - val_accuracy: 0.8846
Epoch 10/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2570 - accuracy: 0.9084 - val_loss: 0.3238 - val_accuracy: 0.8854
Epoch 11/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2502 - accuracy: 0.9117 - val_loss: 0.3250 - val_accuracy: 0.8862
Epoch 12/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2453 - accuracy: 0.9145 - val_loss: 0.3299 - val_accuracy: 0.8830
Epoch 13/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2408 - accuracy: 0.9154 - val_loss: 0.3219 - val_accuracy: 0.8870
Epoch 14/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2380 - accuracy: 0.9154 - val_loss: 0.3221 - val_accuracy: 0.8860
Epoch 15/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2378 - accuracy: 0.9166 - val_loss: 0.3208 - val_accuracy: 0.8864
Epoch 16/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2318 - accuracy: 0.9191 - val_loss: 0.3184 - val_accuracy: 0.8892
Epoch 17/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2266 - accuracy: 0.9212 - val_loss: 0.3197 - val_accuracy: 0.8906
Epoch 18/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2284 - accuracy: 0.9185 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 19/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2286 - accuracy: 0.9205 - val_loss: 0.3197 - val_accuracy: 0.8884
Epoch 20/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2288 - accuracy: 0.9211 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 21/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2265 - accuracy: 0.9212 - val_loss: 0.3179 - val_accuracy: 0.8904
Epoch 22/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2258 - accuracy: 0.9205 - val_loss: 0.3163 - val_accuracy: 0.8914
Epoch 23/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9226 - val_loss: 0.3170 - val_accuracy: 0.8904
Epoch 24/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2182 - accuracy: 0.9244 - val_loss: 0.3165 - val_accuracy: 0.8898
Epoch 25/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9229 - val_loss: 0.3164 - val_accuracy: 0.8904
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
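# As with ExponentialDecay above, the schedule is simply passed to an optimizer (a sketch):
optimizer = keras.optimizers.SGD(learning_rate)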
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
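    # After each batch this callback records the current (learning rate, loss) pair and then
    # multiplies the learning rate by a constant factor, so a single epoch sweeps the rate
    # exponentially from min_rate to max_rate (see find_learning_rate below).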
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.learning_rate))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.learning_rate, self.model.optimizer.learning_rate * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = math.ceil(len(X) / batch_size) * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.learning_rate)
K.set_value(model.optimizer.learning_rate, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.learning_rate, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
###Output
_____no_output_____
###Markdown
**Warning**: In the `on_batch_end()` method, `logs["loss"]` used to contain the batch loss, but in TensorFlow 2.2.0 it was replaced with the mean loss (since the start of the epoch). This explains why the graph below is much smoother than in the book (if you are using TF 2.2 or above). It also means that there is a lag between the moment the batch loss starts exploding and the moment the explosion becomes clear in the graph. So you should choose a slightly smaller learning rate than you would have chosen with the "noisy" graph. Alternatively, you can tweak the `ExponentialLearningRate` callback above so it computes the batch loss (based on the current mean loss and the previous mean loss):```pythonclass ExponentialLearningRate(keras.callbacks.Callback): def __init__(self, factor): self.factor = factor self.rates = [] self.losses = [] def on_epoch_begin(self, epoch, logs=None): self.prev_loss = 0 def on_batch_end(self, batch, logs=None): batch_loss = logs["loss"] * (batch + 1) - self.prev_loss * batch self.prev_loss = logs["loss"] self.rates.append(K.get_value(self.model.optimizer.learning_rate)) self.losses.append(batch_loss) K.set_value(self.model.optimizer.learning_rate, self.model.optimizer.learning_rate * self.factor)```
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
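    # 1cycle in three linear phases: ramp the learning rate from start_rate up to max_rate
    # over the first half of the iterations, ramp it back down to start_rate over the second
    # half, then anneal it to last_rate over the final `last_iterations` batches
    # (on_batch_begin updates the optimizer's learning rate before every batch).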
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.learning_rate, rate)
n_epochs = 25
onecycle = OneCycleScheduler(math.ceil(len(X_train) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 1s 2ms/step - loss: 0.6572 - accuracy: 0.7740 - val_loss: 0.4872 - val_accuracy: 0.8338
Epoch 2/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4580 - accuracy: 0.8397 - val_loss: 0.4274 - val_accuracy: 0.8520
Epoch 3/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4121 - accuracy: 0.8545 - val_loss: 0.4116 - val_accuracy: 0.8588
Epoch 4/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3837 - accuracy: 0.8642 - val_loss: 0.3868 - val_accuracy: 0.8688
Epoch 5/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3639 - accuracy: 0.8719 - val_loss: 0.3766 - val_accuracy: 0.8688
Epoch 6/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3456 - accuracy: 0.8775 - val_loss: 0.3739 - val_accuracy: 0.8706
Epoch 7/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3330 - accuracy: 0.8811 - val_loss: 0.3635 - val_accuracy: 0.8708
Epoch 8/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3184 - accuracy: 0.8861 - val_loss: 0.3959 - val_accuracy: 0.8610
Epoch 9/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3065 - accuracy: 0.8890 - val_loss: 0.3475 - val_accuracy: 0.8770
Epoch 10/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2943 - accuracy: 0.8927 - val_loss: 0.3392 - val_accuracy: 0.8806
Epoch 11/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2838 - accuracy: 0.8963 - val_loss: 0.3467 - val_accuracy: 0.8800
Epoch 12/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2707 - accuracy: 0.9024 - val_loss: 0.3646 - val_accuracy: 0.8696
Epoch 13/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2536 - accuracy: 0.9079 - val_loss: 0.3350 - val_accuracy: 0.8842
Epoch 14/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2405 - accuracy: 0.9135 - val_loss: 0.3465 - val_accuracy: 0.8794
Epoch 15/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2279 - accuracy: 0.9185 - val_loss: 0.3257 - val_accuracy: 0.8830
Epoch 16/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2159 - accuracy: 0.9232 - val_loss: 0.3294 - val_accuracy: 0.8824
Epoch 17/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2062 - accuracy: 0.9263 - val_loss: 0.3333 - val_accuracy: 0.8882
Epoch 18/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1978 - accuracy: 0.9301 - val_loss: 0.3235 - val_accuracy: 0.8898
Epoch 19/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1892 - accuracy: 0.9337 - val_loss: 0.3233 - val_accuracy: 0.8906
Epoch 20/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1821 - accuracy: 0.9365 - val_loss: 0.3224 - val_accuracy: 0.8928
Epoch 21/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1752 - accuracy: 0.9400 - val_loss: 0.3220 - val_accuracy: 0.8908
Epoch 22/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1700 - accuracy: 0.9416 - val_loss: 0.3180 - val_accuracy: 0.8962
Epoch 23/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1655 - accuracy: 0.9438 - val_loss: 0.3187 - val_accuracy: 0.8940
Epoch 24/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1627 - accuracy: 0.9454 - val_loss: 0.3177 - val_accuracy: 0.8932
Epoch 25/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1610 - accuracy: 0.9462 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 3.2911 - accuracy: 0.7924 - val_loss: 0.7218 - val_accuracy: 0.8310
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.7282 - accuracy: 0.8245 - val_loss: 0.6826 - val_accuracy: 0.8382
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 0.7611 - accuracy: 0.7576 - val_loss: 0.3730 - val_accuracy: 0.8644
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4306 - accuracy: 0.8401 - val_loss: 0.3395 - val_accuracy: 0.8722
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4225 - accuracy: 0.8432
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
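# Calling the model with training=True keeps the (Alpha)Dropout layers active at inference
# time, so each of the 100 forward passes samples a different sub-network; averaging those
# predictions gives the MC Dropout estimate, and their standard deviation a rough measure
# of the model's uncertainty.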
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
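# max_norm(1.) rescales each neuron's incoming weight vector after every training step so
# that its L2 norm never exceeds 1, which acts as a form of regularization.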
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5763 - accuracy: 0.8020 - val_loss: 0.3674 - val_accuracy: 0.8674
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3545 - accuracy: 0.8709 - val_loss: 0.3714 - val_accuracy: 0.8662
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(learning_rate=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4960 - accuracy: 0.4762
###Markdown
The model with the lowest validation loss gets about 47.6% accuracy on the validation set. It took 27 epochs to reach the lowest validation loss, with roughly 8 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 19s 9ms/step - loss: 1.9765 - accuracy: 0.2968 - val_loss: 1.6602 - val_accuracy: 0.4042
Epoch 2/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6787 - accuracy: 0.4056 - val_loss: 1.5887 - val_accuracy: 0.4304
Epoch 3/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6097 - accuracy: 0.4274 - val_loss: 1.5781 - val_accuracy: 0.4326
Epoch 4/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5574 - accuracy: 0.4486 - val_loss: 1.5064 - val_accuracy: 0.4676
Epoch 5/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5075 - accuracy: 0.4642 - val_loss: 1.4412 - val_accuracy: 0.4844
Epoch 6/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4664 - accuracy: 0.4787 - val_loss: 1.4179 - val_accuracy: 0.4984
Epoch 7/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4334 - accuracy: 0.4932 - val_loss: 1.4277 - val_accuracy: 0.4906
Epoch 8/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.4054 - accuracy: 0.5038 - val_loss: 1.3843 - val_accuracy: 0.5130
Epoch 9/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3816 - accuracy: 0.5106 - val_loss: 1.3691 - val_accuracy: 0.5108
Epoch 10/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3547 - accuracy: 0.5206 - val_loss: 1.3552 - val_accuracy: 0.5226
Epoch 11/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.3244 - accuracy: 0.5371 - val_loss: 1.3678 - val_accuracy: 0.5142
Epoch 12/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3078 - accuracy: 0.5393 - val_loss: 1.3844 - val_accuracy: 0.5080
Epoch 13/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2889 - accuracy: 0.5431 - val_loss: 1.3566 - val_accuracy: 0.5164
Epoch 14/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2607 - accuracy: 0.5559 - val_loss: 1.3626 - val_accuracy: 0.5248
Epoch 15/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2580 - accuracy: 0.5587 - val_loss: 1.3616 - val_accuracy: 0.5276
Epoch 16/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2441 - accuracy: 0.5586 - val_loss: 1.3350 - val_accuracy: 0.5286
Epoch 17/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2241 - accuracy: 0.5676 - val_loss: 1.3370 - val_accuracy: 0.5408
Epoch 18/100
<<29 more lines>>
Epoch 33/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0336 - accuracy: 0.6369 - val_loss: 1.3682 - val_accuracy: 0.5450
Epoch 34/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.0228 - accuracy: 0.6388 - val_loss: 1.3348 - val_accuracy: 0.5458
Epoch 35/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0205 - accuracy: 0.6407 - val_loss: 1.3490 - val_accuracy: 0.5440
Epoch 36/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.0008 - accuracy: 0.6489 - val_loss: 1.3568 - val_accuracy: 0.5408
Epoch 37/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9785 - accuracy: 0.6543 - val_loss: 1.3628 - val_accuracy: 0.5396
Epoch 38/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9832 - accuracy: 0.6592 - val_loss: 1.3617 - val_accuracy: 0.5482
Epoch 39/100
1407/1407 [==============================] - 12s 8ms/step - loss: 0.9707 - accuracy: 0.6581 - val_loss: 1.3767 - val_accuracy: 0.5446
Epoch 40/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9590 - accuracy: 0.6651 - val_loss: 1.4200 - val_accuracy: 0.5314
Epoch 41/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9548 - accuracy: 0.6668 - val_loss: 1.3692 - val_accuracy: 0.5450
Epoch 42/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9480 - accuracy: 0.6667 - val_loss: 1.3841 - val_accuracy: 0.5310
Epoch 43/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9411 - accuracy: 0.6716 - val_loss: 1.4036 - val_accuracy: 0.5382
Epoch 44/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9383 - accuracy: 0.6708 - val_loss: 1.4114 - val_accuracy: 0.5236
Epoch 45/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9258 - accuracy: 0.6769 - val_loss: 1.4224 - val_accuracy: 0.5324
Epoch 46/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9072 - accuracy: 0.6836 - val_loss: 1.3875 - val_accuracy: 0.5442
Epoch 47/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8996 - accuracy: 0.6850 - val_loss: 1.4449 - val_accuracy: 0.5280
Epoch 48/100
1407/1407 [==============================] - 13s 9ms/step - loss: 0.9050 - accuracy: 0.6835 - val_loss: 1.4167 - val_accuracy: 0.5338
Epoch 49/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8934 - accuracy: 0.6880 - val_loss: 1.4260 - val_accuracy: 0.5294
157/157 [==============================] - 1s 2ms/step - loss: 1.3344 - accuracy: 0.5398
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 27 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 5 epochs and continued to make progress until the 16th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 54.0% accuracy instead of 47.6%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 12s instead of 8s, because of the extra computations required by the BN layers. But overall the training time (wall time) was shortened significantly! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4633 - accuracy: 0.4792
###Markdown
We get 47.9% accuracy, which is not much better than the original model (47.6%), and not as good as the model using batch normalization (54.0%). However, convergence was almost as fast as with the BN model, plus each epoch took only 7 seconds. So it's by far the fastest model to train so far. e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 9s 5ms/step - loss: 2.0583 - accuracy: 0.2742 - val_loss: 1.7429 - val_accuracy: 0.3858
Epoch 2/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.6852 - accuracy: 0.4008 - val_loss: 1.7055 - val_accuracy: 0.3792
Epoch 3/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5963 - accuracy: 0.4413 - val_loss: 1.7401 - val_accuracy: 0.4072
Epoch 4/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5231 - accuracy: 0.4634 - val_loss: 1.5728 - val_accuracy: 0.4584
Epoch 5/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.4619 - accuracy: 0.4887 - val_loss: 1.5448 - val_accuracy: 0.4702
Epoch 6/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.4074 - accuracy: 0.5061 - val_loss: 1.5678 - val_accuracy: 0.4664
Epoch 7/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.3718 - accuracy: 0.5222 - val_loss: 1.5764 - val_accuracy: 0.4824
Epoch 8/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.3220 - accuracy: 0.5387 - val_loss: 1.4805 - val_accuracy: 0.4890
Epoch 9/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2908 - accuracy: 0.5487 - val_loss: 1.5521 - val_accuracy: 0.4638
Epoch 10/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.2537 - accuracy: 0.5607 - val_loss: 1.5281 - val_accuracy: 0.4924
Epoch 11/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2215 - accuracy: 0.5782 - val_loss: 1.5147 - val_accuracy: 0.5046
Epoch 12/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.1910 - accuracy: 0.5831 - val_loss: 1.5248 - val_accuracy: 0.5002
Epoch 13/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1659 - accuracy: 0.5982 - val_loss: 1.5620 - val_accuracy: 0.5066
Epoch 14/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1282 - accuracy: 0.6120 - val_loss: 1.5440 - val_accuracy: 0.5180
Epoch 15/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1127 - accuracy: 0.6133 - val_loss: 1.5782 - val_accuracy: 0.5146
Epoch 16/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0917 - accuracy: 0.6266 - val_loss: 1.6182 - val_accuracy: 0.5182
Epoch 17/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.0620 - accuracy: 0.6331 - val_loss: 1.6285 - val_accuracy: 0.5126
Epoch 18/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0433 - accuracy: 0.6413 - val_loss: 1.6299 - val_accuracy: 0.5158
Epoch 19/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0087 - accuracy: 0.6549 - val_loss: 1.7172 - val_accuracy: 0.5062
Epoch 20/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9950 - accuracy: 0.6571 - val_loss: 1.6524 - val_accuracy: 0.5098
Epoch 21/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9848 - accuracy: 0.6652 - val_loss: 1.7686 - val_accuracy: 0.5038
Epoch 22/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9597 - accuracy: 0.6744 - val_loss: 1.6177 - val_accuracy: 0.5084
Epoch 23/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9399 - accuracy: 0.6790 - val_loss: 1.7095 - val_accuracy: 0.5082
Epoch 24/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9148 - accuracy: 0.6884 - val_loss: 1.7160 - val_accuracy: 0.5150
Epoch 25/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9023 - accuracy: 0.6949 - val_loss: 1.7017 - val_accuracy: 0.5152
Epoch 26/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8732 - accuracy: 0.7031 - val_loss: 1.7274 - val_accuracy: 0.5088
Epoch 27/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.8542 - accuracy: 0.7091 - val_loss: 1.7648 - val_accuracy: 0.5166
Epoch 28/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8499 - accuracy: 0.7118 - val_loss: 1.7973 - val_accuracy: 0.5000
157/157 [==============================] - 0s 1ms/step - loss: 1.4805 - accuracy: 0.4890
###Markdown
The model reaches 48.9% accuracy on the validation set. That's very slightly better than without dropout (47.6%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple of utility functions. The first runs the model many times (10 by default) and returns the mean predicted class probabilities. The second uses these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get no accuracy improvement in this case (we're still at 48.9% accuracy). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(learning_rate=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(math.ceil(len(X_train_scaled) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/15
352/352 [==============================] - 3s 6ms/step - loss: 2.2298 - accuracy: 0.2349 - val_loss: 1.7841 - val_accuracy: 0.3834
Epoch 2/15
352/352 [==============================] - 2s 6ms/step - loss: 1.7928 - accuracy: 0.3689 - val_loss: 1.6806 - val_accuracy: 0.4086
Epoch 3/15
352/352 [==============================] - 2s 6ms/step - loss: 1.6475 - accuracy: 0.4190 - val_loss: 1.6378 - val_accuracy: 0.4350
Epoch 4/15
352/352 [==============================] - 2s 6ms/step - loss: 1.5428 - accuracy: 0.4543 - val_loss: 1.6266 - val_accuracy: 0.4390
Epoch 5/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4865 - accuracy: 0.4769 - val_loss: 1.6158 - val_accuracy: 0.4384
Epoch 6/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4339 - accuracy: 0.4866 - val_loss: 1.5850 - val_accuracy: 0.4412
Epoch 7/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4042 - accuracy: 0.5056 - val_loss: 1.6146 - val_accuracy: 0.4384
Epoch 8/15
352/352 [==============================] - 2s 6ms/step - loss: 1.3437 - accuracy: 0.5229 - val_loss: 1.5299 - val_accuracy: 0.4846
Epoch 9/15
352/352 [==============================] - 2s 5ms/step - loss: 1.2721 - accuracy: 0.5459 - val_loss: 1.5145 - val_accuracy: 0.4874
Epoch 10/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1942 - accuracy: 0.5698 - val_loss: 1.4958 - val_accuracy: 0.5040
Epoch 11/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1211 - accuracy: 0.6033 - val_loss: 1.5406 - val_accuracy: 0.4984
Epoch 12/15
352/352 [==============================] - 2s 6ms/step - loss: 1.0673 - accuracy: 0.6161 - val_loss: 1.5284 - val_accuracy: 0.5144
Epoch 13/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9927 - accuracy: 0.6435 - val_loss: 1.5449 - val_accuracy: 0.5140
Epoch 14/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9205 - accuracy: 0.6703 - val_loss: 1.5652 - val_accuracy: 0.5224
Epoch 15/15
352/352 [==============================] - 2s 6ms/step - loss: 0.8936 - accuracy: 0.6801 - val_loss: 1.5912 - val_accuracy: 0.5198
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4.9!
###Code
(100 - 97.05) / (100 - 99.40)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
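As a quick reminder (the standard momentum update rule, stated here for reference rather than taken from a notebook cell): `keras.optimizers.SGD` with momentum keeps a velocity vector $\mathbf{m}$ and applies $\mathbf{m} \leftarrow \beta \mathbf{m} - \eta \nabla_\theta J(\boldsymbol{\theta})$ followed by $\boldsymbol{\theta} \leftarrow \boldsymbol{\theta} + \mathbf{m}$, where $\eta$ is the learning rate (0.001 below) and $\beta$ the `momentum` argument (0.9 below).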
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(learning_rate=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
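As a quick sanity check (assuming the 55,000-sample training set and batch size 32 used in this notebook, i.e. 1,719 steps per epoch): with `lr0 = 0.01` and `decay = 1e-4` (so `s = 10000`), the learning rate is roughly `0.01 / (1 + 1719 / 10000) ≈ 0.0085` after one epoch and roughly `0.01 / (1 + 42975 / 10000) ≈ 0.0019` after 25 epochs, which matches the curve plotted below.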
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
import math
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = math.ceil(len(X_train) / batch_size)
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
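For example, with `lr0 = 0.01` and `s = 20` as in the cell below, the learning rate is divided by 10 every 20 epochs: it is about `0.01 * 10**(-10/20) ≈ 0.0032` at epoch 10 and exactly `0.001` at epoch 20.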
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
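###Markdown
A minimal usage sketch (reusing the compiled `model`, the scaled data and `n_epochs` from the previous training cell; this is not a cell from the original notebook): a two-argument schedule function is wired up exactly like the one-argument version, and Keras passes it the optimizer's current learning rate at the start of each epoch.
###Code
# Sketch only: the callback is created the same way; Keras detects the
# two-argument signature and supplies the current learning rate itself.
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
# history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
#                     validation_data=(X_valid_scaled, y_valid),
#                     callbacks=[lr_scheduler])
###Output
_____no_output_____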
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.learning_rate)
        K.set_value(self.model.optimizer.learning_rate, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.learning_rate)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(learning_rate=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5995 - accuracy: 0.7923 - val_loss: 0.4095 - val_accuracy: 0.8606
Epoch 2/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3890 - accuracy: 0.8613 - val_loss: 0.3738 - val_accuracy: 0.8692
Epoch 3/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3530 - accuracy: 0.8772 - val_loss: 0.3735 - val_accuracy: 0.8692
Epoch 4/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3296 - accuracy: 0.8813 - val_loss: 0.3494 - val_accuracy: 0.8798
Epoch 5/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3178 - accuracy: 0.8867 - val_loss: 0.3430 - val_accuracy: 0.8794
Epoch 6/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2930 - accuracy: 0.8951 - val_loss: 0.3414 - val_accuracy: 0.8826
Epoch 7/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2854 - accuracy: 0.8985 - val_loss: 0.3354 - val_accuracy: 0.8810
Epoch 8/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9039 - val_loss: 0.3364 - val_accuracy: 0.8824
Epoch 9/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9047 - val_loss: 0.3265 - val_accuracy: 0.8846
Epoch 10/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2570 - accuracy: 0.9084 - val_loss: 0.3238 - val_accuracy: 0.8854
Epoch 11/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2502 - accuracy: 0.9117 - val_loss: 0.3250 - val_accuracy: 0.8862
Epoch 12/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2453 - accuracy: 0.9145 - val_loss: 0.3299 - val_accuracy: 0.8830
Epoch 13/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2408 - accuracy: 0.9154 - val_loss: 0.3219 - val_accuracy: 0.8870
Epoch 14/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2380 - accuracy: 0.9154 - val_loss: 0.3221 - val_accuracy: 0.8860
Epoch 15/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2378 - accuracy: 0.9166 - val_loss: 0.3208 - val_accuracy: 0.8864
Epoch 16/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2318 - accuracy: 0.9191 - val_loss: 0.3184 - val_accuracy: 0.8892
Epoch 17/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2266 - accuracy: 0.9212 - val_loss: 0.3197 - val_accuracy: 0.8906
Epoch 18/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2284 - accuracy: 0.9185 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 19/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2286 - accuracy: 0.9205 - val_loss: 0.3197 - val_accuracy: 0.8884
Epoch 20/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2288 - accuracy: 0.9211 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 21/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2265 - accuracy: 0.9212 - val_loss: 0.3179 - val_accuracy: 0.8904
Epoch 22/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2258 - accuracy: 0.9205 - val_loss: 0.3163 - val_accuracy: 0.8914
Epoch 23/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9226 - val_loss: 0.3170 - val_accuracy: 0.8904
Epoch 24/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2182 - accuracy: 0.9244 - val_loss: 0.3165 - val_accuracy: 0.8898
Epoch 25/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9229 - val_loss: 0.3164 - val_accuracy: 0.8904
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
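###Markdown
A hedged follow-up sketch (not a cell from the original notebook): like the `ExponentialDecay` schedule above, this schedule object is passed directly to the optimizer as its learning rate, so the rate is updated at every training step without any callback.
###Code
# Sketch only: reuse the `learning_rate` schedule defined in the previous cell.
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
###Output
_____no_output_____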
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.learning_rate))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.learning_rate, self.model.optimizer.learning_rate * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = math.ceil(len(X) / batch_size) * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.learning_rate)
K.set_value(model.optimizer.learning_rate, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.learning_rate, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
###Output
_____no_output_____
###Markdown
**Warning**: In the `on_batch_end()` method, `logs["loss"]` used to contain the batch loss, but in TensorFlow 2.2.0 it was replaced with the mean loss (since the start of the epoch). This explains why the graph below is much smoother than in the book (if you are using TF 2.2 or above). It also means that there is a lag between the moment the batch loss starts exploding and the moment the explosion becomes clear in the graph. So you should choose a slightly smaller learning rate than you would have chosen with the "noisy" graph. Alternatively, you can tweak the `ExponentialLearningRate` callback above so it computes the batch loss (based on the current mean loss and the previous mean loss):
```python
class ExponentialLearningRate(keras.callbacks.Callback):
    def __init__(self, factor):
        self.factor = factor
        self.rates = []
        self.losses = []
    def on_epoch_begin(self, epoch, logs=None):
        self.prev_loss = 0
    def on_batch_end(self, batch, logs=None):
        batch_loss = logs["loss"] * (batch + 1) - self.prev_loss * batch
        self.prev_loss = logs["loss"]
        self.rates.append(K.get_value(self.model.optimizer.learning_rate))
        self.losses.append(batch_loss)
        K.set_value(self.model.optimizer.learning_rate, self.model.optimizer.learning_rate * self.factor)
```
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.learning_rate, rate)
n_epochs = 25
onecycle = OneCycleScheduler(math.ceil(len(X_train) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 1s 2ms/step - loss: 0.6572 - accuracy: 0.7740 - val_loss: 0.4872 - val_accuracy: 0.8338
Epoch 2/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4580 - accuracy: 0.8397 - val_loss: 0.4274 - val_accuracy: 0.8520
Epoch 3/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4121 - accuracy: 0.8545 - val_loss: 0.4116 - val_accuracy: 0.8588
Epoch 4/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3837 - accuracy: 0.8642 - val_loss: 0.3868 - val_accuracy: 0.8688
Epoch 5/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3639 - accuracy: 0.8719 - val_loss: 0.3766 - val_accuracy: 0.8688
Epoch 6/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3456 - accuracy: 0.8775 - val_loss: 0.3739 - val_accuracy: 0.8706
Epoch 7/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3330 - accuracy: 0.8811 - val_loss: 0.3635 - val_accuracy: 0.8708
Epoch 8/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3184 - accuracy: 0.8861 - val_loss: 0.3959 - val_accuracy: 0.8610
Epoch 9/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3065 - accuracy: 0.8890 - val_loss: 0.3475 - val_accuracy: 0.8770
Epoch 10/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2943 - accuracy: 0.8927 - val_loss: 0.3392 - val_accuracy: 0.8806
Epoch 11/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2838 - accuracy: 0.8963 - val_loss: 0.3467 - val_accuracy: 0.8800
Epoch 12/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2707 - accuracy: 0.9024 - val_loss: 0.3646 - val_accuracy: 0.8696
Epoch 13/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2536 - accuracy: 0.9079 - val_loss: 0.3350 - val_accuracy: 0.8842
Epoch 14/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2405 - accuracy: 0.9135 - val_loss: 0.3465 - val_accuracy: 0.8794
Epoch 15/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2279 - accuracy: 0.9185 - val_loss: 0.3257 - val_accuracy: 0.8830
Epoch 16/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2159 - accuracy: 0.9232 - val_loss: 0.3294 - val_accuracy: 0.8824
Epoch 17/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2062 - accuracy: 0.9263 - val_loss: 0.3333 - val_accuracy: 0.8882
Epoch 18/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1978 - accuracy: 0.9301 - val_loss: 0.3235 - val_accuracy: 0.8898
Epoch 19/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1892 - accuracy: 0.9337 - val_loss: 0.3233 - val_accuracy: 0.8906
Epoch 20/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1821 - accuracy: 0.9365 - val_loss: 0.3224 - val_accuracy: 0.8928
Epoch 21/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1752 - accuracy: 0.9400 - val_loss: 0.3220 - val_accuracy: 0.8908
Epoch 22/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1700 - accuracy: 0.9416 - val_loss: 0.3180 - val_accuracy: 0.8962
Epoch 23/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1655 - accuracy: 0.9438 - val_loss: 0.3187 - val_accuracy: 0.8940
Epoch 24/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1627 - accuracy: 0.9454 - val_loss: 0.3177 - val_accuracy: 0.8932
Epoch 25/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1610 - accuracy: 0.9462 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
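As a reminder of what these penalties are (stated for reference, not from a notebook cell): `keras.regularizers.l2(0.01)` adds $0.01 \sum_i w_i^2$ over the layer's kernel weights to the training loss, while `l1(0.1)` would add $0.1 \sum_i |w_i|$; biases are left unregularized unless a `bias_regularizer` is also given.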
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 3.2911 - accuracy: 0.7924 - val_loss: 0.7218 - val_accuracy: 0.8310
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.7282 - accuracy: 0.8245 - val_loss: 0.6826 - val_accuracy: 0.8382
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 0.7611 - accuracy: 0.7576 - val_loss: 0.3730 - val_accuracy: 0.8644
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4306 - accuracy: 0.8401 - val_loss: 0.3395 - val_accuracy: 0.8722
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4225 - accuracy: 0.8432
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
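A quick reminder of what the constraint does (stated for reference, not from a notebook cell): after each training step, `max_norm(1.)` rescales the incoming weight vector $\mathbf{w}$ of each unit so that $\lVert\mathbf{w}\rVert_2 \le r$ with $r = 1$, i.e. whenever $\lVert\mathbf{w}\rVert_2 > r$ it applies $\mathbf{w} \leftarrow \mathbf{w} \, r / \lVert\mathbf{w}\rVert_2$.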
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5763 - accuracy: 0.8020 - val_loss: 0.3674 - val_accuracy: 0.8674
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3545 - accuracy: 0.8709 - val_loss: 0.3714 - val_accuracy: 0.8662
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(learning_rate=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4960 - accuracy: 0.4762
###Markdown
The model with the lowest validation loss gets about 47.6% accuracy on the validation set. It took 27 epochs to reach the lowest validation loss, with roughly 8 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:
* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.
* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.
* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 19s 9ms/step - loss: 1.9765 - accuracy: 0.2968 - val_loss: 1.6602 - val_accuracy: 0.4042
Epoch 2/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6787 - accuracy: 0.4056 - val_loss: 1.5887 - val_accuracy: 0.4304
Epoch 3/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6097 - accuracy: 0.4274 - val_loss: 1.5781 - val_accuracy: 0.4326
Epoch 4/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5574 - accuracy: 0.4486 - val_loss: 1.5064 - val_accuracy: 0.4676
Epoch 5/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5075 - accuracy: 0.4642 - val_loss: 1.4412 - val_accuracy: 0.4844
Epoch 6/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4664 - accuracy: 0.4787 - val_loss: 1.4179 - val_accuracy: 0.4984
Epoch 7/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4334 - accuracy: 0.4932 - val_loss: 1.4277 - val_accuracy: 0.4906
Epoch 8/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.4054 - accuracy: 0.5038 - val_loss: 1.3843 - val_accuracy: 0.5130
Epoch 9/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3816 - accuracy: 0.5106 - val_loss: 1.3691 - val_accuracy: 0.5108
Epoch 10/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3547 - accuracy: 0.5206 - val_loss: 1.3552 - val_accuracy: 0.5226
Epoch 11/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.3244 - accuracy: 0.5371 - val_loss: 1.3678 - val_accuracy: 0.5142
Epoch 12/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3078 - accuracy: 0.5393 - val_loss: 1.3844 - val_accuracy: 0.5080
Epoch 13/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2889 - accuracy: 0.5431 - val_loss: 1.3566 - val_accuracy: 0.5164
Epoch 14/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2607 - accuracy: 0.5559 - val_loss: 1.3626 - val_accuracy: 0.5248
Epoch 15/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2580 - accuracy: 0.5587 - val_loss: 1.3616 - val_accuracy: 0.5276
Epoch 16/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2441 - accuracy: 0.5586 - val_loss: 1.3350 - val_accuracy: 0.5286
Epoch 17/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2241 - accuracy: 0.5676 - val_loss: 1.3370 - val_accuracy: 0.5408
Epoch 18/100
<<29 more lines>>
Epoch 33/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0336 - accuracy: 0.6369 - val_loss: 1.3682 - val_accuracy: 0.5450
Epoch 34/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.0228 - accuracy: 0.6388 - val_loss: 1.3348 - val_accuracy: 0.5458
Epoch 35/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0205 - accuracy: 0.6407 - val_loss: 1.3490 - val_accuracy: 0.5440
Epoch 36/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.0008 - accuracy: 0.6489 - val_loss: 1.3568 - val_accuracy: 0.5408
Epoch 37/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9785 - accuracy: 0.6543 - val_loss: 1.3628 - val_accuracy: 0.5396
Epoch 38/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9832 - accuracy: 0.6592 - val_loss: 1.3617 - val_accuracy: 0.5482
Epoch 39/100
1407/1407 [==============================] - 12s 8ms/step - loss: 0.9707 - accuracy: 0.6581 - val_loss: 1.3767 - val_accuracy: 0.5446
Epoch 40/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9590 - accuracy: 0.6651 - val_loss: 1.4200 - val_accuracy: 0.5314
Epoch 41/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9548 - accuracy: 0.6668 - val_loss: 1.3692 - val_accuracy: 0.5450
Epoch 42/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9480 - accuracy: 0.6667 - val_loss: 1.3841 - val_accuracy: 0.5310
Epoch 43/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9411 - accuracy: 0.6716 - val_loss: 1.4036 - val_accuracy: 0.5382
Epoch 44/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9383 - accuracy: 0.6708 - val_loss: 1.4114 - val_accuracy: 0.5236
Epoch 45/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9258 - accuracy: 0.6769 - val_loss: 1.4224 - val_accuracy: 0.5324
Epoch 46/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9072 - accuracy: 0.6836 - val_loss: 1.3875 - val_accuracy: 0.5442
Epoch 47/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8996 - accuracy: 0.6850 - val_loss: 1.4449 - val_accuracy: 0.5280
Epoch 48/100
1407/1407 [==============================] - 13s 9ms/step - loss: 0.9050 - accuracy: 0.6835 - val_loss: 1.4167 - val_accuracy: 0.5338
Epoch 49/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8934 - accuracy: 0.6880 - val_loss: 1.4260 - val_accuracy: 0.5294
157/157 [==============================] - 1s 2ms/step - loss: 1.3344 - accuracy: 0.5398
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 27 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 5 epochs and continued to make progress until the 16th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.
* *Does BN produce a better model?* Yes! The final model is also much better, with 54.0% accuracy instead of 47.6%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).
* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 12s instead of 8s, because of the extra computations required by the BN layers. But overall the training time (wall time) was shortened significantly!
d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4633 - accuracy: 0.4792
###Markdown
We get 47.9% accuracy, which is not much better than the original model (47.6%), and not as good as the model using batch normalization (54.0%). However, convergence was almost as fast as with the BN model, plus each epoch took only 7 seconds. So it's by far the fastest model to train so far. e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 9s 5ms/step - loss: 2.0583 - accuracy: 0.2742 - val_loss: 1.7429 - val_accuracy: 0.3858
Epoch 2/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.6852 - accuracy: 0.4008 - val_loss: 1.7055 - val_accuracy: 0.3792
Epoch 3/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5963 - accuracy: 0.4413 - val_loss: 1.7401 - val_accuracy: 0.4072
Epoch 4/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5231 - accuracy: 0.4634 - val_loss: 1.5728 - val_accuracy: 0.4584
Epoch 5/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.4619 - accuracy: 0.4887 - val_loss: 1.5448 - val_accuracy: 0.4702
Epoch 6/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.4074 - accuracy: 0.5061 - val_loss: 1.5678 - val_accuracy: 0.4664
Epoch 7/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.3718 - accuracy: 0.5222 - val_loss: 1.5764 - val_accuracy: 0.4824
Epoch 8/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.3220 - accuracy: 0.5387 - val_loss: 1.4805 - val_accuracy: 0.4890
Epoch 9/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2908 - accuracy: 0.5487 - val_loss: 1.5521 - val_accuracy: 0.4638
Epoch 10/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.2537 - accuracy: 0.5607 - val_loss: 1.5281 - val_accuracy: 0.4924
Epoch 11/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2215 - accuracy: 0.5782 - val_loss: 1.5147 - val_accuracy: 0.5046
Epoch 12/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.1910 - accuracy: 0.5831 - val_loss: 1.5248 - val_accuracy: 0.5002
Epoch 13/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1659 - accuracy: 0.5982 - val_loss: 1.5620 - val_accuracy: 0.5066
Epoch 14/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1282 - accuracy: 0.6120 - val_loss: 1.5440 - val_accuracy: 0.5180
Epoch 15/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1127 - accuracy: 0.6133 - val_loss: 1.5782 - val_accuracy: 0.5146
Epoch 16/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0917 - accuracy: 0.6266 - val_loss: 1.6182 - val_accuracy: 0.5182
Epoch 17/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.0620 - accuracy: 0.6331 - val_loss: 1.6285 - val_accuracy: 0.5126
Epoch 18/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0433 - accuracy: 0.6413 - val_loss: 1.6299 - val_accuracy: 0.5158
Epoch 19/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0087 - accuracy: 0.6549 - val_loss: 1.7172 - val_accuracy: 0.5062
Epoch 20/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9950 - accuracy: 0.6571 - val_loss: 1.6524 - val_accuracy: 0.5098
Epoch 21/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9848 - accuracy: 0.6652 - val_loss: 1.7686 - val_accuracy: 0.5038
Epoch 22/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9597 - accuracy: 0.6744 - val_loss: 1.6177 - val_accuracy: 0.5084
Epoch 23/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9399 - accuracy: 0.6790 - val_loss: 1.7095 - val_accuracy: 0.5082
Epoch 24/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9148 - accuracy: 0.6884 - val_loss: 1.7160 - val_accuracy: 0.5150
Epoch 25/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9023 - accuracy: 0.6949 - val_loss: 1.7017 - val_accuracy: 0.5152
Epoch 26/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8732 - accuracy: 0.7031 - val_loss: 1.7274 - val_accuracy: 0.5088
Epoch 27/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.8542 - accuracy: 0.7091 - val_loss: 1.7648 - val_accuracy: 0.5166
Epoch 28/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8499 - accuracy: 0.7118 - val_loss: 1.7973 - val_accuracy: 0.5000
157/157 [==============================] - 0s 1ms/step - loss: 1.4805 - accuracy: 0.4890
###Markdown
The model reaches 48.9% accuracy on the validation set. That's very slightly better than without dropout (47.6%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple of utility functions. The first will run the model many times (10 by default) and return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get no accuracy improvement in this case (we're still at 48.9% accuracy). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(learning_rate=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(math.ceil(len(X_train_scaled) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/15
352/352 [==============================] - 3s 6ms/step - loss: 2.2298 - accuracy: 0.2349 - val_loss: 1.7841 - val_accuracy: 0.3834
Epoch 2/15
352/352 [==============================] - 2s 6ms/step - loss: 1.7928 - accuracy: 0.3689 - val_loss: 1.6806 - val_accuracy: 0.4086
Epoch 3/15
352/352 [==============================] - 2s 6ms/step - loss: 1.6475 - accuracy: 0.4190 - val_loss: 1.6378 - val_accuracy: 0.4350
Epoch 4/15
352/352 [==============================] - 2s 6ms/step - loss: 1.5428 - accuracy: 0.4543 - val_loss: 1.6266 - val_accuracy: 0.4390
Epoch 5/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4865 - accuracy: 0.4769 - val_loss: 1.6158 - val_accuracy: 0.4384
Epoch 6/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4339 - accuracy: 0.4866 - val_loss: 1.5850 - val_accuracy: 0.4412
Epoch 7/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4042 - accuracy: 0.5056 - val_loss: 1.6146 - val_accuracy: 0.4384
Epoch 8/15
352/352 [==============================] - 2s 6ms/step - loss: 1.3437 - accuracy: 0.5229 - val_loss: 1.5299 - val_accuracy: 0.4846
Epoch 9/15
352/352 [==============================] - 2s 5ms/step - loss: 1.2721 - accuracy: 0.5459 - val_loss: 1.5145 - val_accuracy: 0.4874
Epoch 10/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1942 - accuracy: 0.5698 - val_loss: 1.4958 - val_accuracy: 0.5040
Epoch 11/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1211 - accuracy: 0.6033 - val_loss: 1.5406 - val_accuracy: 0.4984
Epoch 12/15
352/352 [==============================] - 2s 6ms/step - loss: 1.0673 - accuracy: 0.6161 - val_loss: 1.5284 - val_accuracy: 0.5144
Epoch 13/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9927 - accuracy: 0.6435 - val_loss: 1.5449 - val_accuracy: 0.5140
Epoch 14/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9205 - accuracy: 0.6703 - val_loss: 1.5652 - val_accuracy: 0.5224
Epoch 15/15
352/352 [==============================] - 2s 6ms/step - loss: 0.8936 - accuracy: 0.6801 - val_loss: 1.5912 - val_accuracy: 0.5198
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 2s 41us/sample - loss: 1.2810 - accuracy: 0.6205 - val_loss: 0.8869 - val_accuracy: 0.7160
Epoch 2/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.7952 - accuracy: 0.7369 - val_loss: 0.7132 - val_accuracy: 0.7626
Epoch 3/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6817 - accuracy: 0.7726 - val_loss: 0.6385 - val_accuracy: 0.7894
Epoch 4/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6219 - accuracy: 0.7942 - val_loss: 0.5931 - val_accuracy: 0.8016
Epoch 5/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5830 - accuracy: 0.8074 - val_loss: 0.5607 - val_accuracy: 0.8170
Epoch 6/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5552 - accuracy: 0.8172 - val_loss: 0.5355 - val_accuracy: 0.8238
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5339 - accuracy: 0.8226 - val_loss: 0.5166 - val_accuracy: 0.8298
Epoch 8/10
55000/55000 [==============================] - 2s 43us/sample - loss: 0.5173 - accuracy: 0.8262 - val_loss: 0.5043 - val_accuracy: 0.8356
Epoch 9/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5039 - accuracy: 0.8306 - val_loss: 0.4889 - val_accuracy: 0.8384
Epoch 10/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.4923 - accuracy: 0.8333 - val_loss: 0.4816 - val_accuracy: 0.8394
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 47us/sample - loss: 1.3452 - accuracy: 0.6203 - val_loss: 0.9241 - val_accuracy: 0.7170
Epoch 2/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.8196 - accuracy: 0.7364 - val_loss: 0.7314 - val_accuracy: 0.7600
Epoch 3/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.6970 - accuracy: 0.7701 - val_loss: 0.6517 - val_accuracy: 0.7880
Epoch 4/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.6333 - accuracy: 0.7914 - val_loss: 0.6032 - val_accuracy: 0.8050
Epoch 5/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5916 - accuracy: 0.8049 - val_loss: 0.5689 - val_accuracy: 0.8162
Epoch 6/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5619 - accuracy: 0.8143 - val_loss: 0.5416 - val_accuracy: 0.8222
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5391 - accuracy: 0.8208 - val_loss: 0.5213 - val_accuracy: 0.8300
Epoch 8/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5214 - accuracy: 0.8258 - val_loss: 0.5075 - val_accuracy: 0.8348
Epoch 9/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5070 - accuracy: 0.8287 - val_loss: 0.4917 - val_accuracy: 0.8380
Epoch 10/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.4946 - accuracy: 0.8322 - val_loss: 0.4839 - val_accuracy: 0.8378
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial; just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6314 - accuracy: 0.5054 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 2s 970us/step - loss: 0.8416 - accuracy: 0.7247 - val_loss: 0.7130 - val_accuracy: 0.7656
Epoch 3/10
1719/1719 [==============================] - 2s 907us/step - loss: 0.7053 - accuracy: 0.7637 - val_loss: 0.6427 - val_accuracy: 0.7898
Epoch 4/10
1719/1719 [==============================] - 1s 867us/step - loss: 0.6325 - accuracy: 0.7908 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 2s 879us/step - loss: 0.5992 - accuracy: 0.8021 - val_loss: 0.5582 - val_accuracy: 0.8200
Epoch 6/10
1719/1719 [==============================] - 2s 900us/step - loss: 0.5624 - accuracy: 0.8142 - val_loss: 0.5350 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 2s 876us/step - loss: 0.5379 - accuracy: 0.8217 - val_loss: 0.5157 - val_accuracy: 0.8304
Epoch 8/10
1719/1719 [==============================] - 1s 849us/step - loss: 0.5152 - accuracy: 0.8295 - val_loss: 0.5078 - val_accuracy: 0.8284
Epoch 9/10
1719/1719 [==============================] - 1s 836us/step - loss: 0.5100 - accuracy: 0.8268 - val_loss: 0.4895 - val_accuracy: 0.8390
Epoch 10/10
1719/1719 [==============================] - 1s 830us/step - loss: 0.4918 - accuracy: 0.8340 - val_loss: 0.4817 - val_accuracy: 0.8396
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 998us/step - loss: 1.6969 - accuracy: 0.4974 - val_loss: 0.9255 - val_accuracy: 0.7186
Epoch 2/10
1719/1719 [==============================] - 2s 950us/step - loss: 0.8706 - accuracy: 0.7247 - val_loss: 0.7305 - val_accuracy: 0.7630
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.7211 - accuracy: 0.7621 - val_loss: 0.6564 - val_accuracy: 0.7882
Epoch 4/10
1719/1719 [==============================] - 2s 952us/step - loss: 0.6447 - accuracy: 0.7879 - val_loss: 0.6003 - val_accuracy: 0.8048
Epoch 5/10
1719/1719 [==============================] - 2s 950us/step - loss: 0.6077 - accuracy: 0.8004 - val_loss: 0.5656 - val_accuracy: 0.8182
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5692 - accuracy: 0.8118 - val_loss: 0.5406 - val_accuracy: 0.8236
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5428 - accuracy: 0.8194 - val_loss: 0.5196 - val_accuracy: 0.8314
Epoch 8/10
1719/1719 [==============================] - 2s 956us/step - loss: 0.5193 - accuracy: 0.8284 - val_loss: 0.5113 - val_accuracy: 0.8316
Epoch 9/10
1719/1719 [==============================] - 2s 956us/step - loss: 0.5128 - accuracy: 0.8272 - val_loss: 0.4916 - val_accuracy: 0.8378
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4941 - accuracy: 0.8314 - val_loss: 0.4826 - val_accuracy: 0.8398
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial; just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 12s 6ms/step - loss: 1.3556 - accuracy: 0.4808 - val_loss: 0.7711 - val_accuracy: 0.6858
Epoch 2/5
1719/1719 [==============================] - 10s 6ms/step - loss: 0.7537 - accuracy: 0.7235 - val_loss: 0.7534 - val_accuracy: 0.7384
Epoch 3/5
1719/1719 [==============================] - 10s 6ms/step - loss: 0.7451 - accuracy: 0.7357 - val_loss: 0.5943 - val_accuracy: 0.7834
Epoch 4/5
1719/1719 [==============================] - 10s 6ms/step - loss: 0.5699 - accuracy: 0.7906 - val_loss: 0.5434 - val_accuracy: 0.8066
Epoch 5/5
1719/1719 [==============================] - 10s 6ms/step - loss: 0.5569 - accuracy: 0.8051 - val_loss: 0.4907 - val_accuracy: 0.8218
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 11s 5ms/step - loss: 2.0460 - accuracy: 0.1919 - val_loss: 1.5971 - val_accuracy: 0.3048
Epoch 2/5
1719/1719 [==============================] - 9s 5ms/step - loss: 1.2654 - accuracy: 0.4591 - val_loss: 0.9156 - val_accuracy: 0.6372
Epoch 3/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.9312 - accuracy: 0.6169 - val_loss: 0.8928 - val_accuracy: 0.6246
Epoch 4/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.8188 - accuracy: 0.6710 - val_loss: 0.6914 - val_accuracy: 0.7396
Epoch 5/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7288 - accuracy: 0.7152 - val_loss: 0.6638 - val_accuracy: 0.7380
###Markdown
Not great at all: we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 2ms/step - loss: 1.2287 - accuracy: 0.5993 - val_loss: 0.5526 - val_accuracy: 0.8230
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5996 - accuracy: 0.7959 - val_loss: 0.4725 - val_accuracy: 0.8468
Epoch 3/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5312 - accuracy: 0.8168 - val_loss: 0.4375 - val_accuracy: 0.8558
Epoch 4/10
1719/1719 [==============================] - 3s 1ms/step - loss: 0.4884 - accuracy: 0.8294 - val_loss: 0.4153 - val_accuracy: 0.8596
Epoch 5/10
1719/1719 [==============================] - 3s 1ms/step - loss: 0.4717 - accuracy: 0.8343 - val_loss: 0.3997 - val_accuracy: 0.8640
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4420 - accuracy: 0.8461 - val_loss: 0.3867 - val_accuracy: 0.8694
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4285 - accuracy: 0.8496 - val_loss: 0.3763 - val_accuracy: 0.8710
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4086 - accuracy: 0.8552 - val_loss: 0.3711 - val_accuracy: 0.8740
Epoch 9/10
1719/1719 [==============================] - 3s 1ms/step - loss: 0.4079 - accuracy: 0.8566 - val_loss: 0.3631 - val_accuracy: 0.8752
Epoch 10/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.3903 - accuracy: 0.8617 - val_loss: 0.3573 - val_accuracy: 0.8750
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer adds an offset parameter of its own; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 2ms/step - loss: 1.3677 - accuracy: 0.5604 - val_loss: 0.6767 - val_accuracy: 0.7812
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.7136 - accuracy: 0.7702 - val_loss: 0.5566 - val_accuracy: 0.8184
Epoch 3/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.6123 - accuracy: 0.7990 - val_loss: 0.5007 - val_accuracy: 0.8360
Epoch 4/10
1719/1719 [==============================] - 3s 1ms/step - loss: 0.5547 - accuracy: 0.8148 - val_loss: 0.4666 - val_accuracy: 0.8448
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5255 - accuracy: 0.8230 - val_loss: 0.4434 - val_accuracy: 0.8534
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4947 - accuracy: 0.8328 - val_loss: 0.4263 - val_accuracy: 0.8550
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4736 - accuracy: 0.8385 - val_loss: 0.4130 - val_accuracy: 0.8566
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4550 - accuracy: 0.8446 - val_loss: 0.4035 - val_accuracy: 0.8612
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4495 - accuracy: 0.8440 - val_loss: 0.3943 - val_accuracy: 0.8638
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4333 - accuracy: 0.8494 - val_loss: 0.3875 - val_accuracy: 0.8660
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
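# A minimal usage sketch (not executed here): a clipped optimizer is passed to
# model.compile() like any other; `model` below stands for any already-built Keras model.
# model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])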
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 1s 83ms/step - loss: 0.6155 - accuracy: 0.6184 - val_loss: 0.5843 - val_accuracy: 0.6329
Epoch 2/4
7/7 [==============================] - 0s 9ms/step - loss: 0.5550 - accuracy: 0.6638 - val_loss: 0.5467 - val_accuracy: 0.6805
Epoch 3/4
7/7 [==============================] - 0s 8ms/step - loss: 0.4897 - accuracy: 0.7482 - val_loss: 0.5146 - val_accuracy: 0.7089
Epoch 4/4
7/7 [==============================] - 0s 8ms/step - loss: 0.4899 - accuracy: 0.7405 - val_loss: 0.4859 - val_accuracy: 0.7323
Epoch 1/16
7/7 [==============================] - 0s 28ms/step - loss: 0.4380 - accuracy: 0.7774 - val_loss: 0.3460 - val_accuracy: 0.8661
Epoch 2/16
7/7 [==============================] - 0s 9ms/step - loss: 0.2971 - accuracy: 0.9143 - val_loss: 0.2603 - val_accuracy: 0.9310
Epoch 3/16
7/7 [==============================] - 0s 9ms/step - loss: 0.2034 - accuracy: 0.9777 - val_loss: 0.2110 - val_accuracy: 0.9554
Epoch 4/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1754 - accuracy: 0.9719 - val_loss: 0.1790 - val_accuracy: 0.9696
Epoch 5/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1348 - accuracy: 0.9809 - val_loss: 0.1561 - val_accuracy: 0.9757
Epoch 6/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1172 - accuracy: 0.9973 - val_loss: 0.1392 - val_accuracy: 0.9797
Epoch 7/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1137 - accuracy: 0.9931 - val_loss: 0.1266 - val_accuracy: 0.9838
Epoch 8/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1000 - accuracy: 0.9931 - val_loss: 0.1163 - val_accuracy: 0.9858
Epoch 9/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0834 - accuracy: 1.0000 - val_loss: 0.1065 - val_accuracy: 0.9888
Epoch 10/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0775 - accuracy: 1.0000 - val_loss: 0.0999 - val_accuracy: 0.9899
Epoch 11/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0689 - accuracy: 1.0000 - val_loss: 0.0939 - val_accuracy: 0.9899
Epoch 12/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0888 - val_accuracy: 0.9899
Epoch 13/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0565 - accuracy: 1.0000 - val_loss: 0.0839 - val_accuracy: 0.9899
Epoch 14/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0494 - accuracy: 1.0000 - val_loss: 0.0802 - val_accuracy: 0.9899
Epoch 15/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0544 - accuracy: 1.0000 - val_loss: 0.0768 - val_accuracy: 0.9899
Epoch 16/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0472 - accuracy: 1.0000 - val_loss: 0.0738 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 705us/step - loss: 0.0682 - accuracy: 0.9935
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4.5!
###Code
(100 - 97.05) / (100 - 99.35)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
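# Keras's legacy `decay` argument applies lr = lr0 / (1 + decay * step) at each batch update
# (i.e. c=1 and s=1/decay); for example, after 10,000 updates: 0.01 / (1 + 1e-4 * 10000) = 0.005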
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
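# here lr0=0.01 and s=20, so the learning rate gets divided by 10 every 20 epochs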
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
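# multiplying the previous learning rate by 0.1**(1/20) at every epoch produces the same
# schedule as above, provided the optimizer's initial learning rate is 0.01; note that it
# now depends on that initial value rather than on a hard-coded lr0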
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
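# this schedule computes lr = 0.01 * 0.1**(step / s) at every training step (the per-step
# equivalent of the epoch-based exponential schedule above) and is applied by the optimizer itself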
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5995 - accuracy: 0.7923 - val_loss: 0.4095 - val_accuracy: 0.8606
Epoch 2/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3890 - accuracy: 0.8613 - val_loss: 0.3738 - val_accuracy: 0.8692
Epoch 3/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3530 - accuracy: 0.8772 - val_loss: 0.3735 - val_accuracy: 0.8692
Epoch 4/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3296 - accuracy: 0.8813 - val_loss: 0.3494 - val_accuracy: 0.8798
Epoch 5/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3178 - accuracy: 0.8867 - val_loss: 0.3430 - val_accuracy: 0.8794
Epoch 6/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2930 - accuracy: 0.8951 - val_loss: 0.3414 - val_accuracy: 0.8826
Epoch 7/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2854 - accuracy: 0.8985 - val_loss: 0.3354 - val_accuracy: 0.8810
Epoch 8/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9039 - val_loss: 0.3364 - val_accuracy: 0.8824
Epoch 9/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9047 - val_loss: 0.3265 - val_accuracy: 0.8846
Epoch 10/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2570 - accuracy: 0.9084 - val_loss: 0.3238 - val_accuracy: 0.8854
Epoch 11/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2502 - accuracy: 0.9117 - val_loss: 0.3250 - val_accuracy: 0.8862
Epoch 12/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2453 - accuracy: 0.9145 - val_loss: 0.3299 - val_accuracy: 0.8830
Epoch 13/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2408 - accuracy: 0.9154 - val_loss: 0.3219 - val_accuracy: 0.8870
Epoch 14/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2380 - accuracy: 0.9154 - val_loss: 0.3221 - val_accuracy: 0.8860
Epoch 15/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2378 - accuracy: 0.9166 - val_loss: 0.3208 - val_accuracy: 0.8864
Epoch 16/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2318 - accuracy: 0.9191 - val_loss: 0.3184 - val_accuracy: 0.8892
Epoch 17/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2266 - accuracy: 0.9212 - val_loss: 0.3197 - val_accuracy: 0.8906
Epoch 18/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2284 - accuracy: 0.9185 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 19/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2286 - accuracy: 0.9205 - val_loss: 0.3197 - val_accuracy: 0.8884
Epoch 20/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2288 - accuracy: 0.9211 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 21/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2265 - accuracy: 0.9212 - val_loss: 0.3179 - val_accuracy: 0.8904
Epoch 22/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2258 - accuracy: 0.9205 - val_loss: 0.3163 - val_accuracy: 0.8914
Epoch 23/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9226 - val_loss: 0.3170 - val_accuracy: 0.8904
Epoch 24/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2182 - accuracy: 0.9244 - val_loss: 0.3165 - val_accuracy: 0.8898
Epoch 25/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9229 - val_loss: 0.3164 - val_accuracy: 0.8904
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
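# a minimal sketch (not executed here): like ExponentialDecay above, the schedule is simply
# passed to an optimizer; note that the boundaries are expressed in training steps, not epochs
# optimizer = keras.optimizers.SGD(learning_rate)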
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
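# find_learning_rate() trains for `epochs` epoch(s) while growing the learning rate
# exponentially from min_rate to max_rate, records the (rate, loss) pairs, then restores
# the model's initial weights and learning rate so that real training can start fresh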
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
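# the 1cycle policy: the learning rate rises linearly from start_rate to max_rate during the
# first half of the cycle, falls back to start_rate during the second half, then drops further
# to last_rate over the final iterations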
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 1s 2ms/step - loss: 0.6572 - accuracy: 0.7740 - val_loss: 0.4872 - val_accuracy: 0.8338
Epoch 2/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4580 - accuracy: 0.8397 - val_loss: 0.4274 - val_accuracy: 0.8520
Epoch 3/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4121 - accuracy: 0.8545 - val_loss: 0.4116 - val_accuracy: 0.8588
Epoch 4/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3837 - accuracy: 0.8642 - val_loss: 0.3868 - val_accuracy: 0.8688
Epoch 5/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3639 - accuracy: 0.8719 - val_loss: 0.3766 - val_accuracy: 0.8688
Epoch 6/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3456 - accuracy: 0.8775 - val_loss: 0.3739 - val_accuracy: 0.8706
Epoch 7/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3330 - accuracy: 0.8811 - val_loss: 0.3635 - val_accuracy: 0.8708
Epoch 8/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3184 - accuracy: 0.8861 - val_loss: 0.3959 - val_accuracy: 0.8610
Epoch 9/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3065 - accuracy: 0.8890 - val_loss: 0.3475 - val_accuracy: 0.8770
Epoch 10/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2943 - accuracy: 0.8927 - val_loss: 0.3392 - val_accuracy: 0.8806
Epoch 11/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2838 - accuracy: 0.8963 - val_loss: 0.3467 - val_accuracy: 0.8800
Epoch 12/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2707 - accuracy: 0.9024 - val_loss: 0.3646 - val_accuracy: 0.8696
Epoch 13/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2536 - accuracy: 0.9079 - val_loss: 0.3350 - val_accuracy: 0.8842
Epoch 14/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2405 - accuracy: 0.9135 - val_loss: 0.3465 - val_accuracy: 0.8794
Epoch 15/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2279 - accuracy: 0.9185 - val_loss: 0.3257 - val_accuracy: 0.8830
Epoch 16/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2159 - accuracy: 0.9232 - val_loss: 0.3294 - val_accuracy: 0.8824
Epoch 17/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2062 - accuracy: 0.9263 - val_loss: 0.3333 - val_accuracy: 0.8882
Epoch 18/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1978 - accuracy: 0.9301 - val_loss: 0.3235 - val_accuracy: 0.8898
Epoch 19/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1892 - accuracy: 0.9337 - val_loss: 0.3233 - val_accuracy: 0.8906
Epoch 20/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1821 - accuracy: 0.9365 - val_loss: 0.3224 - val_accuracy: 0.8928
Epoch 21/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1752 - accuracy: 0.9400 - val_loss: 0.3220 - val_accuracy: 0.8908
Epoch 22/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1700 - accuracy: 0.9416 - val_loss: 0.3180 - val_accuracy: 0.8962
Epoch 23/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1655 - accuracy: 0.9438 - val_loss: 0.3187 - val_accuracy: 0.8940
Epoch 24/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1627 - accuracy: 0.9454 - val_loss: 0.3177 - val_accuracy: 0.8932
Epoch 25/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1610 - accuracy: 0.9462 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 3.2911 - accuracy: 0.7924 - val_loss: 0.7218 - val_accuracy: 0.8310
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.7282 - accuracy: 0.8245 - val_loss: 0.6826 - val_accuracy: 0.8382
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 0.7611 - accuracy: 0.7576 - val_loss: 0.3730 - val_accuracy: 0.8644
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4306 - accuracy: 0.8401 - val_loss: 0.3395 - val_accuracy: 0.8722
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4225 - accuracy: 0.8432
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
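# Monte Carlo Dropout: run 100 stochastic forward passes with dropout kept active
# (training=True), then average the predicted probabilities across the passes.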
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
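# These subclasses keep dropout active even at inference time, so every forward
# pass remains stochastic (the building block needed for MC Dropout):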
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
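The `max_norm(1.)` kernel constraint used below rescales each neuron's incoming weight vector after every training step whenever its norm exceeds the threshold, so that $\lVert \mathbf{w} \rVert_2 \le 1$ always holds.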
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5763 - accuracy: 0.8020 - val_loss: 0.3674 - val_accuracy: 0.8674
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3545 - accuracy: 0.8709 - val_loss: 0.3714 - val_accuracy: 0.8662
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
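###Markdown
The learning-rate comparison described above is not shown as code, so here is a rough sketch of how such a sweep could be scripted (an illustrative assumption using hypothetical names such as `build_cifar10_dnn` and the `my_cifar10_lr_sweep_logs` directory, not the code actually used for the comparison):
###Code
# Illustrative sketch (assumption), not the code used for the comparison above:
# train the same 20x100 ELU architecture for 10 epochs at several learning rates,
# logging each run to TensorBoard so the learning curves can be compared.
def build_cifar10_dnn():
    model = keras.models.Sequential()
    model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
    for _ in range(20):
        model.add(keras.layers.Dense(100, activation="elu",
                                     kernel_initializer="he_normal"))
    model.add(keras.layers.Dense(10, activation="softmax"))
    return model
for lr in (1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 1e-3, 3e-3, 1e-2):
    keras.backend.clear_session()
    tf.random.set_seed(42)
    np.random.seed(42)
    sweep_model = build_cifar10_dnn()
    sweep_model.compile(loss="sparse_categorical_crossentropy",
                        optimizer=keras.optimizers.Nadam(lr=lr),
                        metrics=["accuracy"])
    sweep_logdir = os.path.join(os.curdir, "my_cifar10_lr_sweep_logs",
                                "lr_{:.0e}".format(lr))
    sweep_model.fit(X_train, y_train, epochs=10,
                    validation_data=(X_valid, y_valid),
                    callbacks=[keras.callbacks.TensorBoard(sweep_logdir)])
###Output
_____no_output_____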
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4960 - accuracy: 0.4762
###Markdown
The model with the lowest validation loss gets about 47.6% accuracy on the validation set. It took 27 epochs to reach the lowest validation loss, with roughly 8 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 19s 9ms/step - loss: 1.9765 - accuracy: 0.2968 - val_loss: 1.6602 - val_accuracy: 0.4042
Epoch 2/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6787 - accuracy: 0.4056 - val_loss: 1.5887 - val_accuracy: 0.4304
Epoch 3/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6097 - accuracy: 0.4274 - val_loss: 1.5781 - val_accuracy: 0.4326
Epoch 4/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5574 - accuracy: 0.4486 - val_loss: 1.5064 - val_accuracy: 0.4676
Epoch 5/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5075 - accuracy: 0.4642 - val_loss: 1.4412 - val_accuracy: 0.4844
Epoch 6/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4664 - accuracy: 0.4787 - val_loss: 1.4179 - val_accuracy: 0.4984
Epoch 7/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4334 - accuracy: 0.4932 - val_loss: 1.4277 - val_accuracy: 0.4906
Epoch 8/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.4054 - accuracy: 0.5038 - val_loss: 1.3843 - val_accuracy: 0.5130
Epoch 9/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3816 - accuracy: 0.5106 - val_loss: 1.3691 - val_accuracy: 0.5108
Epoch 10/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3547 - accuracy: 0.5206 - val_loss: 1.3552 - val_accuracy: 0.5226
Epoch 11/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.3244 - accuracy: 0.5371 - val_loss: 1.3678 - val_accuracy: 0.5142
Epoch 12/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3078 - accuracy: 0.5393 - val_loss: 1.3844 - val_accuracy: 0.5080
Epoch 13/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2889 - accuracy: 0.5431 - val_loss: 1.3566 - val_accuracy: 0.5164
Epoch 14/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2607 - accuracy: 0.5559 - val_loss: 1.3626 - val_accuracy: 0.5248
Epoch 15/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2580 - accuracy: 0.5587 - val_loss: 1.3616 - val_accuracy: 0.5276
Epoch 16/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2441 - accuracy: 0.5586 - val_loss: 1.3350 - val_accuracy: 0.5286
Epoch 17/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2241 - accuracy: 0.5676 - val_loss: 1.3370 - val_accuracy: 0.5408
Epoch 18/100
<<29 more lines>>
Epoch 33/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0336 - accuracy: 0.6369 - val_loss: 1.3682 - val_accuracy: 0.5450
Epoch 34/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.0228 - accuracy: 0.6388 - val_loss: 1.3348 - val_accuracy: 0.5458
Epoch 35/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0205 - accuracy: 0.6407 - val_loss: 1.3490 - val_accuracy: 0.5440
Epoch 36/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.0008 - accuracy: 0.6489 - val_loss: 1.3568 - val_accuracy: 0.5408
Epoch 37/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9785 - accuracy: 0.6543 - val_loss: 1.3628 - val_accuracy: 0.5396
Epoch 38/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9832 - accuracy: 0.6592 - val_loss: 1.3617 - val_accuracy: 0.5482
Epoch 39/100
1407/1407 [==============================] - 12s 8ms/step - loss: 0.9707 - accuracy: 0.6581 - val_loss: 1.3767 - val_accuracy: 0.5446
Epoch 40/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9590 - accuracy: 0.6651 - val_loss: 1.4200 - val_accuracy: 0.5314
Epoch 41/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9548 - accuracy: 0.6668 - val_loss: 1.3692 - val_accuracy: 0.5450
Epoch 42/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9480 - accuracy: 0.6667 - val_loss: 1.3841 - val_accuracy: 0.5310
Epoch 43/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9411 - accuracy: 0.6716 - val_loss: 1.4036 - val_accuracy: 0.5382
Epoch 44/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9383 - accuracy: 0.6708 - val_loss: 1.4114 - val_accuracy: 0.5236
Epoch 45/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9258 - accuracy: 0.6769 - val_loss: 1.4224 - val_accuracy: 0.5324
Epoch 46/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9072 - accuracy: 0.6836 - val_loss: 1.3875 - val_accuracy: 0.5442
Epoch 47/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8996 - accuracy: 0.6850 - val_loss: 1.4449 - val_accuracy: 0.5280
Epoch 48/100
1407/1407 [==============================] - 13s 9ms/step - loss: 0.9050 - accuracy: 0.6835 - val_loss: 1.4167 - val_accuracy: 0.5338
Epoch 49/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8934 - accuracy: 0.6880 - val_loss: 1.4260 - val_accuracy: 0.5294
157/157 [==============================] - 1s 2ms/step - loss: 1.3344 - accuracy: 0.5398
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 27 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 5 epochs and continued to make progress until the 16th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 54.0% accuracy instead of 47.6%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 12s instead of 8s, because of the extra computations required by the BN layers. But overall the training time (wall time) was shortened significantly! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4633 - accuracy: 0.4792
###Markdown
We get 47.9% accuracy, which is not much better than the original model (47.6%), and not as good as the model using batch normalization (54.0%). However, convergence was almost as fast as with the BN model, plus each epoch took only 7 seconds. So it's by far the fastest model to train so far. e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 9s 5ms/step - loss: 2.0583 - accuracy: 0.2742 - val_loss: 1.7429 - val_accuracy: 0.3858
Epoch 2/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.6852 - accuracy: 0.4008 - val_loss: 1.7055 - val_accuracy: 0.3792
Epoch 3/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5963 - accuracy: 0.4413 - val_loss: 1.7401 - val_accuracy: 0.4072
Epoch 4/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5231 - accuracy: 0.4634 - val_loss: 1.5728 - val_accuracy: 0.4584
Epoch 5/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.4619 - accuracy: 0.4887 - val_loss: 1.5448 - val_accuracy: 0.4702
Epoch 6/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.4074 - accuracy: 0.5061 - val_loss: 1.5678 - val_accuracy: 0.4664
Epoch 7/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.3718 - accuracy: 0.5222 - val_loss: 1.5764 - val_accuracy: 0.4824
Epoch 8/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.3220 - accuracy: 0.5387 - val_loss: 1.4805 - val_accuracy: 0.4890
Epoch 9/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2908 - accuracy: 0.5487 - val_loss: 1.5521 - val_accuracy: 0.4638
Epoch 10/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.2537 - accuracy: 0.5607 - val_loss: 1.5281 - val_accuracy: 0.4924
Epoch 11/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2215 - accuracy: 0.5782 - val_loss: 1.5147 - val_accuracy: 0.5046
Epoch 12/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.1910 - accuracy: 0.5831 - val_loss: 1.5248 - val_accuracy: 0.5002
Epoch 13/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1659 - accuracy: 0.5982 - val_loss: 1.5620 - val_accuracy: 0.5066
Epoch 14/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1282 - accuracy: 0.6120 - val_loss: 1.5440 - val_accuracy: 0.5180
Epoch 15/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1127 - accuracy: 0.6133 - val_loss: 1.5782 - val_accuracy: 0.5146
Epoch 16/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0917 - accuracy: 0.6266 - val_loss: 1.6182 - val_accuracy: 0.5182
Epoch 17/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.0620 - accuracy: 0.6331 - val_loss: 1.6285 - val_accuracy: 0.5126
Epoch 18/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0433 - accuracy: 0.6413 - val_loss: 1.6299 - val_accuracy: 0.5158
Epoch 19/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0087 - accuracy: 0.6549 - val_loss: 1.7172 - val_accuracy: 0.5062
Epoch 20/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9950 - accuracy: 0.6571 - val_loss: 1.6524 - val_accuracy: 0.5098
Epoch 21/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9848 - accuracy: 0.6652 - val_loss: 1.7686 - val_accuracy: 0.5038
Epoch 22/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9597 - accuracy: 0.6744 - val_loss: 1.6177 - val_accuracy: 0.5084
Epoch 23/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9399 - accuracy: 0.6790 - val_loss: 1.7095 - val_accuracy: 0.5082
Epoch 24/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9148 - accuracy: 0.6884 - val_loss: 1.7160 - val_accuracy: 0.5150
Epoch 25/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9023 - accuracy: 0.6949 - val_loss: 1.7017 - val_accuracy: 0.5152
Epoch 26/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8732 - accuracy: 0.7031 - val_loss: 1.7274 - val_accuracy: 0.5088
Epoch 27/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.8542 - accuracy: 0.7091 - val_loss: 1.7648 - val_accuracy: 0.5166
Epoch 28/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8499 - accuracy: 0.7118 - val_loss: 1.7973 - val_accuracy: 0.5000
157/157 [==============================] - 0s 1ms/step - loss: 1.4805 - accuracy: 0.4890
###Markdown
The model reaches 48.9% accuracy on the validation set. That's very slightly better than without dropout (47.6%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get no accuracy improvement in this case (we're still at 48.9% accuracy). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
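The next cell reuses the `find_learning_rate`, `plot_lr_vs_loss`, and `OneCycleScheduler` helpers, which are not redefined in this section. Purely as an illustration of the idea, a simplified 1cycle-style callback could look like the following sketch (a hypothetical `SimpleOneCycle` class, not the implementation actually used below):
###Code
# Illustrative sketch (assumption), not the OneCycleScheduler used below:
# ramp the learning rate linearly from max_rate/10 up to max_rate over the first
# half of training, then back down over the second half, updating it every batch.
class SimpleOneCycle(keras.callbacks.Callback):
    def __init__(self, total_steps, max_rate):
        super().__init__()
        self.total_steps = total_steps
        self.max_rate = max_rate
        self.step = 0
    def on_batch_begin(self, batch, logs=None):
        half = self.total_steps // 2
        start_rate = self.max_rate / 10
        if self.step < half:    # ramp up
            rate = start_rate + (self.max_rate - start_rate) * self.step / half
        else:                   # ramp back down, never dropping below start_rate/100
            rate = self.max_rate - (self.max_rate - start_rate) * (self.step - half) / half
            rate = max(rate, start_rate / 100)
        keras.backend.set_value(self.model.optimizer.lr, rate)
        self.step += 1
###Output
_____no_output_____
###Markdown
With that idea in mind, let's run the learning rate finder and then train the model with 1cycle scheduling: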
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(len(X_train_scaled) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/15
352/352 [==============================] - 3s 6ms/step - loss: 2.2298 - accuracy: 0.2349 - val_loss: 1.7841 - val_accuracy: 0.3834
Epoch 2/15
352/352 [==============================] - 2s 6ms/step - loss: 1.7928 - accuracy: 0.3689 - val_loss: 1.6806 - val_accuracy: 0.4086
Epoch 3/15
352/352 [==============================] - 2s 6ms/step - loss: 1.6475 - accuracy: 0.4190 - val_loss: 1.6378 - val_accuracy: 0.4350
Epoch 4/15
352/352 [==============================] - 2s 6ms/step - loss: 1.5428 - accuracy: 0.4543 - val_loss: 1.6266 - val_accuracy: 0.4390
Epoch 5/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4865 - accuracy: 0.4769 - val_loss: 1.6158 - val_accuracy: 0.4384
Epoch 6/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4339 - accuracy: 0.4866 - val_loss: 1.5850 - val_accuracy: 0.4412
Epoch 7/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4042 - accuracy: 0.5056 - val_loss: 1.6146 - val_accuracy: 0.4384
Epoch 8/15
352/352 [==============================] - 2s 6ms/step - loss: 1.3437 - accuracy: 0.5229 - val_loss: 1.5299 - val_accuracy: 0.4846
Epoch 9/15
352/352 [==============================] - 2s 5ms/step - loss: 1.2721 - accuracy: 0.5459 - val_loss: 1.5145 - val_accuracy: 0.4874
Epoch 10/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1942 - accuracy: 0.5698 - val_loss: 1.4958 - val_accuracy: 0.5040
Epoch 11/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1211 - accuracy: 0.6033 - val_loss: 1.5406 - val_accuracy: 0.4984
Epoch 12/15
352/352 [==============================] - 2s 6ms/step - loss: 1.0673 - accuracy: 0.6161 - val_loss: 1.5284 - val_accuracy: 0.5144
Epoch 13/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9927 - accuracy: 0.6435 - val_loss: 1.5449 - val_accuracy: 0.5140
Epoch 14/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9205 - accuracy: 0.6703 - val_loss: 1.5652 - val_accuracy: 0.5224
Epoch 15/15
352/352 [==============================] - 2s 6ms/step - loss: 0.8936 - accuracy: 0.6801 - val_loss: 1.5912 - val_accuracy: 0.5198
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
5000/5000 [==============================] - 0s 65us/sample - loss: 1.5099 - accuracy: 0.4736
###Markdown
The model with the lowest validation loss gets about 47% accuracy on the validation set. It took 39 epochs to reach the lowest validation loss, with roughly 10 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 21s 466us/sample - loss: 1.8365 - accuracy: 0.3390 - val_loss: 1.6330 - val_accuracy: 0.4174
Epoch 2/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.6623 - accuracy: 0.4063 - val_loss: 1.5967 - val_accuracy: 0.4204
Epoch 3/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.5946 - accuracy: 0.4314 - val_loss: 1.5225 - val_accuracy: 0.4602
Epoch 4/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5417 - accuracy: 0.4551 - val_loss: 1.4680 - val_accuracy: 0.4756
Epoch 5/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5013 - accuracy: 0.4678 - val_loss: 1.4378 - val_accuracy: 0.4862
Epoch 6/100
45000/45000 [==============================] - 16s 361us/sample - loss: 1.4637 - accuracy: 0.4797 - val_loss: 1.4221 - val_accuracy: 0.4982
Epoch 7/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.4361 - accuracy: 0.4921 - val_loss: 1.4133 - val_accuracy: 0.4968
Epoch 8/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.4078 - accuracy: 0.4998 - val_loss: 1.3916 - val_accuracy: 0.5040
Epoch 9/100
45000/45000 [==============================] - 14s 315us/sample - loss: 1.3811 - accuracy: 0.5104 - val_loss: 1.3695 - val_accuracy: 0.5116
Epoch 10/100
45000/45000 [==============================] - 14s 318us/sample - loss: 1.3571 - accuracy: 0.5205 - val_loss: 1.3701 - val_accuracy: 0.5112
Epoch 11/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.3367 - accuracy: 0.5246 - val_loss: 1.3549 - val_accuracy: 0.5196
Epoch 12/100
45000/45000 [==============================] - 14s 316us/sample - loss: 1.3158 - accuracy: 0.5322 - val_loss: 1.4038 - val_accuracy: 0.5048
Epoch 13/100
45000/45000 [==============================] - 15s 328us/sample - loss: 1.3028 - accuracy: 0.5392 - val_loss: 1.3453 - val_accuracy: 0.5242
Epoch 14/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2798 - accuracy: 0.5460 - val_loss: 1.3427 - val_accuracy: 0.5218
Epoch 15/100
45000/45000 [==============================] - 15s 327us/sample - loss: 1.2642 - accuracy: 0.5502 - val_loss: 1.3802 - val_accuracy: 0.5072
Epoch 16/100
45000/45000 [==============================] - 15s 336us/sample - loss: 1.2497 - accuracy: 0.5592 - val_loss: 1.3870 - val_accuracy: 0.5154
Epoch 17/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.2339 - accuracy: 0.5645 - val_loss: 1.3270 - val_accuracy: 0.5366
Epoch 18/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2223 - accuracy: 0.5688 - val_loss: 1.3054 - val_accuracy: 0.5506
Epoch 19/100
45000/45000 [==============================] - 15s 339us/sample - loss: 1.2015 - accuracy: 0.5750 - val_loss: 1.3134 - val_accuracy: 0.5462
Epoch 20/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.1884 - accuracy: 0.5796 - val_loss: 1.3459 - val_accuracy: 0.5252
Epoch 21/100
45000/45000 [==============================] - 17s 370us/sample - loss: 1.1767 - accuracy: 0.5876 - val_loss: 1.3404 - val_accuracy: 0.5392
Epoch 22/100
45000/45000 [==============================] - 16s 366us/sample - loss: 1.1679 - accuracy: 0.5872 - val_loss: 1.3600 - val_accuracy: 0.5332
Epoch 23/100
45000/45000 [==============================] - 15s 337us/sample - loss: 1.1513 - accuracy: 0.5954 - val_loss: 1.3148 - val_accuracy: 0.5498
Epoch 24/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.1345 - accuracy: 0.6033 - val_loss: 1.3290 - val_accuracy: 0.5368
Epoch 25/100
45000/45000 [==============================] - 16s 350us/sample - loss: 1.1252 - accuracy: 0.6025 - val_loss: 1.3350 - val_accuracy: 0.5434
Epoch 26/100
45000/45000 [==============================] - 15s 341us/sample - loss: 1.1192 - accuracy: 0.6070 - val_loss: 1.3423 - val_accuracy: 0.5364
Epoch 27/100
45000/45000 [==============================] - 15s 342us/sample - loss: 1.1028 - accuracy: 0.6093 - val_loss: 1.3511 - val_accuracy: 0.5358
Epoch 28/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.0907 - accuracy: 0.6158 - val_loss: 1.3706 - val_accuracy: 0.5350
Epoch 29/100
45000/45000 [==============================] - 16s 345us/sample - loss: 1.0785 - accuracy: 0.6197 - val_loss: 1.3356 - val_accuracy: 0.5398
Epoch 30/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.0718 - accuracy: 0.6198 - val_loss: 1.3529 - val_accuracy: 0.5446
Epoch 31/100
45000/45000 [==============================] - 15s 333us/sample - loss: 1.0629 - accuracy: 0.6259 - val_loss: 1.3590 - val_accuracy: 0.5434
Epoch 32/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.0504 - accuracy: 0.6292 - val_loss: 1.3448 - val_accuracy: 0.5388
Epoch 33/100
45000/45000 [==============================] - 15s 325us/sample - loss: 1.0420 - accuracy: 0.6318 - val_loss: 1.3790 - val_accuracy: 0.5350
Epoch 34/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.0304 - accuracy: 0.6362 - val_loss: 1.3621 - val_accuracy: 0.5430
Epoch 35/100
45000/45000 [==============================] - 16s 356us/sample - loss: 1.0280 - accuracy: 0.6362 - val_loss: 1.3673 - val_accuracy: 0.5366
Epoch 36/100
45000/45000 [==============================] - 16s 354us/sample - loss: 1.0100 - accuracy: 0.6439 - val_loss: 1.3659 - val_accuracy: 0.5420
Epoch 37/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.0060 - accuracy: 0.6473 - val_loss: 1.3773 - val_accuracy: 0.5398
Epoch 38/100
45000/45000 [==============================] - 15s 332us/sample - loss: 0.9966 - accuracy: 0.6496 - val_loss: 1.3946 - val_accuracy: 0.5340
5000/5000 [==============================] - 1s 157us/sample - loss: 1.3054 - accuracy: 0.5506
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 39 epochs to reach the lowest validation loss, while the new model with BN took 18 epochs. That's more than twice as fast as the previous model. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 55% accuracy instead of 47%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged twice as fast, each epoch took about 16s instead of 10s, because of the extra computations required by the BN layers. So overall, although the number of epochs was reduced by 50%, the training time (wall time) was shortened by 30%, which is still pretty significant! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
5000/5000 [==============================] - 0s 74us/sample - loss: 1.4626 - accuracy: 0.5140
###Markdown
We get 51.4% accuracy, which is better than the original model, but not quite as good as the model using batch normalization. Moreover, it took 13 epochs to reach the best model, which is much faster than both the original model and the BN model, plus each epoch took only 10 seconds, just like the original model. So it's by far the fastest model to train (both in terms of epochs and wall time). e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 12s 263us/sample - loss: 1.8763 - accuracy: 0.3330 - val_loss: 1.7595 - val_accuracy: 0.3668
Epoch 2/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.6527 - accuracy: 0.4148 - val_loss: 1.7666 - val_accuracy: 0.3808
Epoch 3/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.5682 - accuracy: 0.4439 - val_loss: 1.6393 - val_accuracy: 0.4490
Epoch 4/100
45000/45000 [==============================] - 10s 211us/sample - loss: 1.5030 - accuracy: 0.4698 - val_loss: 1.6028 - val_accuracy: 0.4466
Epoch 5/100
45000/45000 [==============================] - 9s 209us/sample - loss: 1.4430 - accuracy: 0.4913 - val_loss: 1.5394 - val_accuracy: 0.4562
Epoch 6/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.4005 - accuracy: 0.5084 - val_loss: 1.5408 - val_accuracy: 0.4818
Epoch 7/100
45000/45000 [==============================] - 10s 216us/sample - loss: 1.3541 - accuracy: 0.5298 - val_loss: 1.5236 - val_accuracy: 0.4866
Epoch 8/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.3189 - accuracy: 0.5405 - val_loss: 1.5174 - val_accuracy: 0.4926
Epoch 9/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.2800 - accuracy: 0.5570 - val_loss: 1.5722 - val_accuracy: 0.4998
Epoch 10/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.2512 - accuracy: 0.5656 - val_loss: 1.4974 - val_accuracy: 0.5082
Epoch 11/100
45000/45000 [==============================] - 9s 203us/sample - loss: 1.2141 - accuracy: 0.5802 - val_loss: 1.6123 - val_accuracy: 0.4916
Epoch 12/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.1856 - accuracy: 0.5893 - val_loss: 1.5449 - val_accuracy: 0.5016
Epoch 13/100
45000/45000 [==============================] - 9s 204us/sample - loss: 1.1602 - accuracy: 0.5978 - val_loss: 1.6241 - val_accuracy: 0.5056
Epoch 14/100
45000/45000 [==============================] - 9s 199us/sample - loss: 1.1290 - accuracy: 0.6118 - val_loss: 1.6085 - val_accuracy: 0.4936
Epoch 15/100
45000/45000 [==============================] - 9s 198us/sample - loss: 1.1050 - accuracy: 0.6176 - val_loss: 1.6951 - val_accuracy: 0.4860
Epoch 16/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.0786 - accuracy: 0.6293 - val_loss: 1.5806 - val_accuracy: 0.5044
Epoch 17/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.0629 - accuracy: 0.6362 - val_loss: 1.5932 - val_accuracy: 0.4970
Epoch 18/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.0330 - accuracy: 0.6458 - val_loss: 1.5968 - val_accuracy: 0.5080
Epoch 19/100
45000/45000 [==============================] - 9s 195us/sample - loss: 1.0104 - accuracy: 0.6488 - val_loss: 1.6166 - val_accuracy: 0.5152
Epoch 20/100
45000/45000 [==============================] - 9s 206us/sample - loss: 0.9896 - accuracy: 0.6629 - val_loss: 1.6174 - val_accuracy: 0.5154
Epoch 21/100
45000/45000 [==============================] - 9s 211us/sample - loss: 0.9741 - accuracy: 0.6650 - val_loss: 1.7201 - val_accuracy: 0.5040
Epoch 22/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9475 - accuracy: 0.6769 - val_loss: 1.7498 - val_accuracy: 0.5176
Epoch 23/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.9346 - accuracy: 0.6780 - val_loss: 1.7491 - val_accuracy: 0.5020
Epoch 24/100
45000/45000 [==============================] - 10s 223us/sample - loss: 1.1878 - accuracy: 0.6792 - val_loss: 1.6664 - val_accuracy: 0.4906
Epoch 25/100
45000/45000 [==============================] - 10s 219us/sample - loss: 0.9851 - accuracy: 0.6646 - val_loss: 1.7358 - val_accuracy: 0.5086
Epoch 26/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9053 - accuracy: 0.6911 - val_loss: 1.8361 - val_accuracy: 0.5094
Epoch 27/100
45000/45000 [==============================] - 10s 215us/sample - loss: 0.8681 - accuracy: 0.7048 - val_loss: 1.8487 - val_accuracy: 0.5036
Epoch 28/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.8460 - accuracy: 0.7132 - val_loss: 1.8516 - val_accuracy: 0.5068
Epoch 29/100
45000/45000 [==============================] - 10s 223us/sample - loss: 0.8258 - accuracy: 0.7208 - val_loss: 1.9383 - val_accuracy: 0.5094
Epoch 30/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.8106 - accuracy: 0.7248 - val_loss: 2.0527 - val_accuracy: 0.4974
5000/5000 [==============================] - 0s 71us/sample - loss: 1.4974 - accuracy: 0.5082
###Markdown
The model reaches 50.8% accuracy on the validation set. That's very slightly worse than without dropout (51.4%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get virtually no accuracy improvement in this case (from 50.8% to 50.9%). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(len(X_train_scaled) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/15
45000/45000 [==============================] - 3s 69us/sample - loss: 2.0504 - accuracy: 0.2823 - val_loss: 1.7711 - val_accuracy: 0.3706
Epoch 2/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.7626 - accuracy: 0.3766 - val_loss: 1.7751 - val_accuracy: 0.3844
Epoch 3/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.6264 - accuracy: 0.4272 - val_loss: 1.6774 - val_accuracy: 0.4216
Epoch 4/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.5527 - accuracy: 0.4474 - val_loss: 1.6633 - val_accuracy: 0.4316
Epoch 5/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.4997 - accuracy: 0.4701 - val_loss: 1.5909 - val_accuracy: 0.4540
Epoch 6/15
45000/45000 [==============================] - 3s 60us/sample - loss: 1.4564 - accuracy: 0.4841 - val_loss: 1.5982 - val_accuracy: 0.4624
Epoch 7/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.4232 - accuracy: 0.4958 - val_loss: 1.6417 - val_accuracy: 0.4382
Epoch 8/15
45000/45000 [==============================] - 3s 58us/sample - loss: 1.3530 - accuracy: 0.5199 - val_loss: 1.5050 - val_accuracy: 0.4778
Epoch 9/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.2771 - accuracy: 0.5480 - val_loss: 1.5254 - val_accuracy: 0.4928
Epoch 10/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.2073 - accuracy: 0.5726 - val_loss: 1.5013 - val_accuracy: 0.5052
Epoch 11/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.1380 - accuracy: 0.5948 - val_loss: 1.4941 - val_accuracy: 0.5170
Epoch 12/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.0672 - accuracy: 0.6204 - val_loss: 1.5091 - val_accuracy: 0.5106
Epoch 13/15
45000/45000 [==============================] - 3s 56us/sample - loss: 0.9967 - accuracy: 0.6466 - val_loss: 1.5261 - val_accuracy: 0.5212
Epoch 14/15
45000/45000 [==============================] - 3s 58us/sample - loss: 0.9301 - accuracy: 0.6712 - val_loss: 1.5437 - val_accuracy: 0.5264
Epoch 15/15
45000/45000 [==============================] - 3s 59us/sample - loss: 0.8893 - accuracy: 0.6866 - val_loss: 1.5650 - val_accuracy: 0.5276
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6314 - accuracy: 0.5054 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.8416 - accuracy: 0.7247 - val_loss: 0.7130 - val_accuracy: 0.7656
Epoch 3/10
1719/1719 [==============================] - 2s 879us/step - loss: 0.7053 - accuracy: 0.7637 - val_loss: 0.6427 - val_accuracy: 0.7898
Epoch 4/10
1719/1719 [==============================] - 2s 883us/step - loss: 0.6325 - accuracy: 0.7908 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 2s 887us/step - loss: 0.5992 - accuracy: 0.8021 - val_loss: 0.5582 - val_accuracy: 0.8200
Epoch 6/10
1719/1719 [==============================] - 2s 881us/step - loss: 0.5624 - accuracy: 0.8142 - val_loss: 0.5350 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.5379 - accuracy: 0.8217 - val_loss: 0.5157 - val_accuracy: 0.8304
Epoch 8/10
1719/1719 [==============================] - 2s 895us/step - loss: 0.5152 - accuracy: 0.8295 - val_loss: 0.5078 - val_accuracy: 0.8284
Epoch 9/10
1719/1719 [==============================] - 2s 911us/step - loss: 0.5100 - accuracy: 0.8268 - val_loss: 0.4895 - val_accuracy: 0.8390
Epoch 10/10
1719/1719 [==============================] - 2s 897us/step - loss: 0.4918 - accuracy: 0.8340 - val_loss: 0.4817 - val_accuracy: 0.8396
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6969 - accuracy: 0.4974 - val_loss: 0.9255 - val_accuracy: 0.7186
Epoch 2/10
1719/1719 [==============================] - 2s 990us/step - loss: 0.8706 - accuracy: 0.7247 - val_loss: 0.7305 - val_accuracy: 0.7630
Epoch 3/10
1719/1719 [==============================] - 2s 980us/step - loss: 0.7211 - accuracy: 0.7621 - val_loss: 0.6564 - val_accuracy: 0.7882
Epoch 4/10
1719/1719 [==============================] - 2s 985us/step - loss: 0.6447 - accuracy: 0.7879 - val_loss: 0.6003 - val_accuracy: 0.8048
Epoch 5/10
1719/1719 [==============================] - 2s 967us/step - loss: 0.6077 - accuracy: 0.8004 - val_loss: 0.5656 - val_accuracy: 0.8182
Epoch 6/10
1719/1719 [==============================] - 2s 984us/step - loss: 0.5692 - accuracy: 0.8118 - val_loss: 0.5406 - val_accuracy: 0.8236
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5428 - accuracy: 0.8194 - val_loss: 0.5196 - val_accuracy: 0.8314
Epoch 8/10
1719/1719 [==============================] - 2s 983us/step - loss: 0.5193 - accuracy: 0.8284 - val_loss: 0.5113 - val_accuracy: 0.8316
Epoch 9/10
1719/1719 [==============================] - 2s 992us/step - loss: 0.5128 - accuracy: 0.8272 - val_loss: 0.4916 - val_accuracy: 0.8378
Epoch 10/10
1719/1719 [==============================] - 2s 988us/step - loss: 0.4941 - accuracy: 0.8314 - val_loss: 0.4826 - val_accuracy: 0.8398
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 12s 6ms/step - loss: 1.3556 - accuracy: 0.4808 - val_loss: 0.7711 - val_accuracy: 0.6858
Epoch 2/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7537 - accuracy: 0.7235 - val_loss: 0.7534 - val_accuracy: 0.7384
Epoch 3/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7451 - accuracy: 0.7357 - val_loss: 0.5943 - val_accuracy: 0.7834
Epoch 4/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5699 - accuracy: 0.7906 - val_loss: 0.5434 - val_accuracy: 0.8066
Epoch 5/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5569 - accuracy: 0.8051 - val_loss: 0.4907 - val_accuracy: 0.8218
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 11s 5ms/step - loss: 2.0460 - accuracy: 0.1919 - val_loss: 1.5971 - val_accuracy: 0.3048
Epoch 2/5
1719/1719 [==============================] - 8s 5ms/step - loss: 1.2654 - accuracy: 0.4591 - val_loss: 0.9156 - val_accuracy: 0.6372
Epoch 3/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.9312 - accuracy: 0.6169 - val_loss: 0.8928 - val_accuracy: 0.6246
Epoch 4/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.8188 - accuracy: 0.6710 - val_loss: 0.6914 - val_accuracy: 0.7396
Epoch 5/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.7288 - accuracy: 0.7152 - val_loss: 0.6638 - val_accuracy: 0.7380
###Markdown
Not great at all: we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
#bn1.updates #deprecated
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.2287 - accuracy: 0.5993 - val_loss: 0.5526 - val_accuracy: 0.8230
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5996 - accuracy: 0.7959 - val_loss: 0.4725 - val_accuracy: 0.8468
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5312 - accuracy: 0.8168 - val_loss: 0.4375 - val_accuracy: 0.8558
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4884 - accuracy: 0.8294 - val_loss: 0.4153 - val_accuracy: 0.8596
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4717 - accuracy: 0.8343 - val_loss: 0.3997 - val_accuracy: 0.8640
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4420 - accuracy: 0.8461 - val_loss: 0.3867 - val_accuracy: 0.8694
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4285 - accuracy: 0.8496 - val_loss: 0.3763 - val_accuracy: 0.8710
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4086 - accuracy: 0.8552 - val_loss: 0.3711 - val_accuracy: 0.8740
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4079 - accuracy: 0.8566 - val_loss: 0.3631 - val_accuracy: 0.8752
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3903 - accuracy: 0.8617 - val_loss: 0.3573 - val_accuracy: 0.8750
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has its own offset parameters; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.3677 - accuracy: 0.5604 - val_loss: 0.6767 - val_accuracy: 0.7812
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.7136 - accuracy: 0.7702 - val_loss: 0.5566 - val_accuracy: 0.8184
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.6123 - accuracy: 0.7990 - val_loss: 0.5007 - val_accuracy: 0.8360
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5547 - accuracy: 0.8148 - val_loss: 0.4666 - val_accuracy: 0.8448
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5255 - accuracy: 0.8230 - val_loss: 0.4434 - val_accuracy: 0.8534
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4947 - accuracy: 0.8328 - val_loss: 0.4263 - val_accuracy: 0.8550
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4736 - accuracy: 0.8385 - val_loss: 0.4130 - val_accuracy: 0.8566
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4550 - accuracy: 0.8446 - val_loss: 0.4035 - val_accuracy: 0.8612
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4495 - accuracy: 0.8440 - val_loss: 0.3943 - val_accuracy: 0.8638
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4333 - accuracy: 0.8494 - val_loss: 0.3875 - val_accuracy: 0.8660
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
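###Markdown
As a minimal sketch of how these would actually be used (the small model below is an illustrative assumption, not from the text), a clipped optimizer is passed to `compile()` like any other; `clipvalue=1.0` caps each gradient component to the range [-1.0, 1.0], while `clipnorm` rescales gradients whose norm exceeds the threshold:
###Code
# Hypothetical sketch: wiring a gradient-clipping optimizer into a model.
clipped_model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"),
    keras.layers.Dense(10, activation="softmax")
])
clipped_model.compile(loss="sparse_categorical_crossentropy",
                      optimizer=keras.optimizers.SGD(lr=1e-3, clipvalue=1.0),
                      metrics=["accuracy"])
###Output
_____no_output_____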
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 1s 83ms/step - loss: 0.6155 - accuracy: 0.6184 - val_loss: 0.5843 - val_accuracy: 0.6329
Epoch 2/4
7/7 [==============================] - 0s 9ms/step - loss: 0.5550 - accuracy: 0.6638 - val_loss: 0.5467 - val_accuracy: 0.6805
Epoch 3/4
7/7 [==============================] - 0s 8ms/step - loss: 0.4897 - accuracy: 0.7482 - val_loss: 0.5146 - val_accuracy: 0.7089
Epoch 4/4
7/7 [==============================] - 0s 8ms/step - loss: 0.4899 - accuracy: 0.7405 - val_loss: 0.4859 - val_accuracy: 0.7323
Epoch 1/16
7/7 [==============================] - 0s 28ms/step - loss: 0.4380 - accuracy: 0.7774 - val_loss: 0.3460 - val_accuracy: 0.8661
Epoch 2/16
7/7 [==============================] - 0s 9ms/step - loss: 0.2971 - accuracy: 0.9143 - val_loss: 0.2603 - val_accuracy: 0.9310
Epoch 3/16
7/7 [==============================] - 0s 9ms/step - loss: 0.2034 - accuracy: 0.9777 - val_loss: 0.2110 - val_accuracy: 0.9554
Epoch 4/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1754 - accuracy: 0.9719 - val_loss: 0.1790 - val_accuracy: 0.9696
Epoch 5/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1348 - accuracy: 0.9809 - val_loss: 0.1561 - val_accuracy: 0.9757
Epoch 6/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1172 - accuracy: 0.9973 - val_loss: 0.1392 - val_accuracy: 0.9797
Epoch 7/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1137 - accuracy: 0.9931 - val_loss: 0.1266 - val_accuracy: 0.9838
Epoch 8/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1000 - accuracy: 0.9931 - val_loss: 0.1163 - val_accuracy: 0.9858
Epoch 9/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0834 - accuracy: 1.0000 - val_loss: 0.1065 - val_accuracy: 0.9888
Epoch 10/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0775 - accuracy: 1.0000 - val_loss: 0.0999 - val_accuracy: 0.9899
Epoch 11/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0689 - accuracy: 1.0000 - val_loss: 0.0939 - val_accuracy: 0.9899
Epoch 12/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0888 - val_accuracy: 0.9899
Epoch 13/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0565 - accuracy: 1.0000 - val_loss: 0.0839 - val_accuracy: 0.9899
Epoch 14/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0494 - accuracy: 1.0000 - val_loss: 0.0802 - val_accuracy: 0.9899
Epoch 15/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0544 - accuracy: 1.0000 - val_loss: 0.0768 - val_accuracy: 0.9899
Epoch 16/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0472 - accuracy: 1.0000 - val_loss: 0.0738 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 705us/step - loss: 0.0682 - accuracy: 0.9935
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4.5!
###Code
(100 - 97.05) / (100 - 99.35)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
import math
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = math.ceil(len(X_train) / batch_size)
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
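###Markdown
A minimal sketch of how this two-argument schedule would be plugged in (assuming the same Fashion MNIST variables as above; the initial rate now comes from the optimizer, since the function derives each new rate from the current one):
###Code
# Hypothetical sketch: Keras passes the current learning rate as the second argument,
# so the schedule only needs the per-epoch decay factor.
model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
    keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
    keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
              optimizer=keras.optimizers.Nadam(lr=0.01),  # sets the initial learning rate
              metrics=["accuracy"])
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=2,  # short run, just to illustrate
                    validation_data=(X_valid_scaled, y_valid),
                    callbacks=[lr_scheduler])
###Output
_____no_output_____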
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5995 - accuracy: 0.7923 - val_loss: 0.4095 - val_accuracy: 0.8606
Epoch 2/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3890 - accuracy: 0.8613 - val_loss: 0.3738 - val_accuracy: 0.8692
Epoch 3/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3530 - accuracy: 0.8772 - val_loss: 0.3735 - val_accuracy: 0.8692
Epoch 4/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3296 - accuracy: 0.8813 - val_loss: 0.3494 - val_accuracy: 0.8798
Epoch 5/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3178 - accuracy: 0.8867 - val_loss: 0.3430 - val_accuracy: 0.8794
Epoch 6/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2930 - accuracy: 0.8951 - val_loss: 0.3414 - val_accuracy: 0.8826
Epoch 7/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2854 - accuracy: 0.8985 - val_loss: 0.3354 - val_accuracy: 0.8810
Epoch 8/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9039 - val_loss: 0.3364 - val_accuracy: 0.8824
Epoch 9/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9047 - val_loss: 0.3265 - val_accuracy: 0.8846
Epoch 10/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2570 - accuracy: 0.9084 - val_loss: 0.3238 - val_accuracy: 0.8854
Epoch 11/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2502 - accuracy: 0.9117 - val_loss: 0.3250 - val_accuracy: 0.8862
Epoch 12/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2453 - accuracy: 0.9145 - val_loss: 0.3299 - val_accuracy: 0.8830
Epoch 13/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2408 - accuracy: 0.9154 - val_loss: 0.3219 - val_accuracy: 0.8870
Epoch 14/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2380 - accuracy: 0.9154 - val_loss: 0.3221 - val_accuracy: 0.8860
Epoch 15/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2378 - accuracy: 0.9166 - val_loss: 0.3208 - val_accuracy: 0.8864
Epoch 16/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2318 - accuracy: 0.9191 - val_loss: 0.3184 - val_accuracy: 0.8892
Epoch 17/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2266 - accuracy: 0.9212 - val_loss: 0.3197 - val_accuracy: 0.8906
Epoch 18/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2284 - accuracy: 0.9185 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 19/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2286 - accuracy: 0.9205 - val_loss: 0.3197 - val_accuracy: 0.8884
Epoch 20/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2288 - accuracy: 0.9211 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 21/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2265 - accuracy: 0.9212 - val_loss: 0.3179 - val_accuracy: 0.8904
Epoch 22/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2258 - accuracy: 0.9205 - val_loss: 0.3163 - val_accuracy: 0.8914
Epoch 23/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9226 - val_loss: 0.3170 - val_accuracy: 0.8904
Epoch 24/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2182 - accuracy: 0.9244 - val_loss: 0.3165 - val_accuracy: 0.8898
Epoch 25/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9229 - val_loss: 0.3164 - val_accuracy: 0.8904
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
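###Markdown
As with the `ExponentialDecay` schedule above, this schedule object is simply passed where a learning rate is expected when creating the optimizer; a minimal sketch (assuming `n_steps_per_epoch` from the power-scheduling cell earlier and reusing the same model architecture, both assumptions on my part):
###Code
# Hypothetical sketch: a schedule object is used in place of a constant learning rate.
model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
    keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
    keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
###Output
_____no_output_____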
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = math.ceil(len(X) / batch_size) * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
###Output
_____no_output_____
###Markdown
**Warning**: In the `on_batch_end()` method, `logs["loss"]` used to contain the batch loss, but in TensorFlow 2.2.0 it was replaced with the mean loss (since the start of the epoch). This explains why the graph below is much smoother than in the book (if you are using TF 2.2 or above). It also means that there is a lag between the moment the batch loss starts exploding and the moment the explosion becomes clear in the graph. So you should choose a slightly smaller learning rate than you would have chosen with the "noisy" graph. Alternatively, you can tweak the `ExponentialLearningRate` callback above so it computes the batch loss (based on the current mean loss and the previous mean loss):```pythonclass ExponentialLearningRate(keras.callbacks.Callback): def __init__(self, factor): self.factor = factor self.rates = [] self.losses = [] def on_epoch_begin(self, epoch, logs=None): self.prev_loss = 0 def on_batch_end(self, batch, logs=None): batch_loss = logs["loss"] * (batch + 1) - self.prev_loss * batch self.prev_loss = logs["loss"] self.rates.append(K.get_value(self.model.optimizer.lr)) self.losses.append(batch_loss) K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)```
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(math.ceil(len(X_train) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 1s 2ms/step - loss: 0.6572 - accuracy: 0.7740 - val_loss: 0.4872 - val_accuracy: 0.8338
Epoch 2/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4580 - accuracy: 0.8397 - val_loss: 0.4274 - val_accuracy: 0.8520
Epoch 3/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4121 - accuracy: 0.8545 - val_loss: 0.4116 - val_accuracy: 0.8588
Epoch 4/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3837 - accuracy: 0.8642 - val_loss: 0.3868 - val_accuracy: 0.8688
Epoch 5/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3639 - accuracy: 0.8719 - val_loss: 0.3766 - val_accuracy: 0.8688
Epoch 6/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3456 - accuracy: 0.8775 - val_loss: 0.3739 - val_accuracy: 0.8706
Epoch 7/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3330 - accuracy: 0.8811 - val_loss: 0.3635 - val_accuracy: 0.8708
Epoch 8/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3184 - accuracy: 0.8861 - val_loss: 0.3959 - val_accuracy: 0.8610
Epoch 9/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3065 - accuracy: 0.8890 - val_loss: 0.3475 - val_accuracy: 0.8770
Epoch 10/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2943 - accuracy: 0.8927 - val_loss: 0.3392 - val_accuracy: 0.8806
Epoch 11/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2838 - accuracy: 0.8963 - val_loss: 0.3467 - val_accuracy: 0.8800
Epoch 12/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2707 - accuracy: 0.9024 - val_loss: 0.3646 - val_accuracy: 0.8696
Epoch 13/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2536 - accuracy: 0.9079 - val_loss: 0.3350 - val_accuracy: 0.8842
Epoch 14/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2405 - accuracy: 0.9135 - val_loss: 0.3465 - val_accuracy: 0.8794
Epoch 15/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2279 - accuracy: 0.9185 - val_loss: 0.3257 - val_accuracy: 0.8830
Epoch 16/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2159 - accuracy: 0.9232 - val_loss: 0.3294 - val_accuracy: 0.8824
Epoch 17/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2062 - accuracy: 0.9263 - val_loss: 0.3333 - val_accuracy: 0.8882
Epoch 18/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1978 - accuracy: 0.9301 - val_loss: 0.3235 - val_accuracy: 0.8898
Epoch 19/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1892 - accuracy: 0.9337 - val_loss: 0.3233 - val_accuracy: 0.8906
Epoch 20/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1821 - accuracy: 0.9365 - val_loss: 0.3224 - val_accuracy: 0.8928
Epoch 21/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1752 - accuracy: 0.9400 - val_loss: 0.3220 - val_accuracy: 0.8908
Epoch 22/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1700 - accuracy: 0.9416 - val_loss: 0.3180 - val_accuracy: 0.8962
Epoch 23/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1655 - accuracy: 0.9438 - val_loss: 0.3187 - val_accuracy: 0.8940
Epoch 24/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1627 - accuracy: 0.9454 - val_loss: 0.3177 - val_accuracy: 0.8932
Epoch 25/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1610 - accuracy: 0.9462 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 3.2911 - accuracy: 0.7924 - val_loss: 0.7218 - val_accuracy: 0.8310
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.7282 - accuracy: 0.8245 - val_loss: 0.6826 - val_accuracy: 0.8382
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 0.7611 - accuracy: 0.7576 - val_loss: 0.3730 - val_accuracy: 0.8644
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4306 - accuracy: 0.8401 - val_loss: 0.3395 - val_accuracy: 0.8722
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4225 - accuracy: 0.8432
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
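###Markdown
To avoid repeating that averaging expression, the Monte Carlo loop can be wrapped in a small helper; this is just a convenience sketch (the name `mc_predict_proba` is my own, not from the book), assuming `mc_model` and `X_test_scaled` defined above:
###Code
# Hypothetical helper: average the predictions of several stochastic forward passes.
def mc_predict_proba(mc_model, X, n_samples=100):
    y_probas = np.stack([mc_model.predict(X) for _ in range(n_samples)])
    return y_probas.mean(axis=0)  # shape: (len(X), n_classes)

np.round(mc_predict_proba(mc_model, X_test_scaled[:1]), 2)
###Output
_____no_output_____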
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5763 - accuracy: 0.8020 - val_loss: 0.3674 - val_accuracy: 0.8674
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3545 - accuracy: 0.8709 - val_loss: 0.3714 - val_accuracy: 0.8662
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4960 - accuracy: 0.4762
###Markdown
The model with the lowest validation loss gets about 47.6% accuracy on the validation set. It took 27 epochs to reach the lowest validation loss, with roughly 8 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 19s 9ms/step - loss: 1.9765 - accuracy: 0.2968 - val_loss: 1.6602 - val_accuracy: 0.4042
Epoch 2/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6787 - accuracy: 0.4056 - val_loss: 1.5887 - val_accuracy: 0.4304
Epoch 3/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6097 - accuracy: 0.4274 - val_loss: 1.5781 - val_accuracy: 0.4326
Epoch 4/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5574 - accuracy: 0.4486 - val_loss: 1.5064 - val_accuracy: 0.4676
Epoch 5/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5075 - accuracy: 0.4642 - val_loss: 1.4412 - val_accuracy: 0.4844
Epoch 6/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4664 - accuracy: 0.4787 - val_loss: 1.4179 - val_accuracy: 0.4984
Epoch 7/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4334 - accuracy: 0.4932 - val_loss: 1.4277 - val_accuracy: 0.4906
Epoch 8/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.4054 - accuracy: 0.5038 - val_loss: 1.3843 - val_accuracy: 0.5130
Epoch 9/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3816 - accuracy: 0.5106 - val_loss: 1.3691 - val_accuracy: 0.5108
Epoch 10/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3547 - accuracy: 0.5206 - val_loss: 1.3552 - val_accuracy: 0.5226
Epoch 11/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.3244 - accuracy: 0.5371 - val_loss: 1.3678 - val_accuracy: 0.5142
Epoch 12/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3078 - accuracy: 0.5393 - val_loss: 1.3844 - val_accuracy: 0.5080
Epoch 13/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2889 - accuracy: 0.5431 - val_loss: 1.3566 - val_accuracy: 0.5164
Epoch 14/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2607 - accuracy: 0.5559 - val_loss: 1.3626 - val_accuracy: 0.5248
Epoch 15/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2580 - accuracy: 0.5587 - val_loss: 1.3616 - val_accuracy: 0.5276
Epoch 16/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2441 - accuracy: 0.5586 - val_loss: 1.3350 - val_accuracy: 0.5286
Epoch 17/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2241 - accuracy: 0.5676 - val_loss: 1.3370 - val_accuracy: 0.5408
Epoch 18/100
<<29 more lines>>
Epoch 33/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0336 - accuracy: 0.6369 - val_loss: 1.3682 - val_accuracy: 0.5450
Epoch 34/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.0228 - accuracy: 0.6388 - val_loss: 1.3348 - val_accuracy: 0.5458
Epoch 35/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0205 - accuracy: 0.6407 - val_loss: 1.3490 - val_accuracy: 0.5440
Epoch 36/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.0008 - accuracy: 0.6489 - val_loss: 1.3568 - val_accuracy: 0.5408
Epoch 37/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9785 - accuracy: 0.6543 - val_loss: 1.3628 - val_accuracy: 0.5396
Epoch 38/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9832 - accuracy: 0.6592 - val_loss: 1.3617 - val_accuracy: 0.5482
Epoch 39/100
1407/1407 [==============================] - 12s 8ms/step - loss: 0.9707 - accuracy: 0.6581 - val_loss: 1.3767 - val_accuracy: 0.5446
Epoch 40/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9590 - accuracy: 0.6651 - val_loss: 1.4200 - val_accuracy: 0.5314
Epoch 41/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9548 - accuracy: 0.6668 - val_loss: 1.3692 - val_accuracy: 0.5450
Epoch 42/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9480 - accuracy: 0.6667 - val_loss: 1.3841 - val_accuracy: 0.5310
Epoch 43/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9411 - accuracy: 0.6716 - val_loss: 1.4036 - val_accuracy: 0.5382
Epoch 44/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9383 - accuracy: 0.6708 - val_loss: 1.4114 - val_accuracy: 0.5236
Epoch 45/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9258 - accuracy: 0.6769 - val_loss: 1.4224 - val_accuracy: 0.5324
Epoch 46/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9072 - accuracy: 0.6836 - val_loss: 1.3875 - val_accuracy: 0.5442
Epoch 47/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8996 - accuracy: 0.6850 - val_loss: 1.4449 - val_accuracy: 0.5280
Epoch 48/100
1407/1407 [==============================] - 13s 9ms/step - loss: 0.9050 - accuracy: 0.6835 - val_loss: 1.4167 - val_accuracy: 0.5338
Epoch 49/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8934 - accuracy: 0.6880 - val_loss: 1.4260 - val_accuracy: 0.5294
157/157 [==============================] - 1s 2ms/step - loss: 1.3344 - accuracy: 0.5398
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 27 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 5 epochs and continued to make progress until the 16th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 54.0% accuracy instead of 47.6%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 12s instead of 8s, because of the extra computations required by the BN layers. Still, since far fewer epochs were needed to reach the best model, the wall-clock time to get there was slightly shorter overall (roughly 16 × 12 s ≈ 192 s versus 27 × 8 s ≈ 216 s). d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4633 - accuracy: 0.4792
###Markdown
We get 47.9% accuracy, which is not much better than the original model (47.6%), and not as good as the model using batch normalization (54.0%). However, convergence was almost as fast as with the BN model, plus each epoch took only 7 seconds. So it's by far the fastest model to train so far. e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 9s 5ms/step - loss: 2.0583 - accuracy: 0.2742 - val_loss: 1.7429 - val_accuracy: 0.3858
Epoch 2/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.6852 - accuracy: 0.4008 - val_loss: 1.7055 - val_accuracy: 0.3792
Epoch 3/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5963 - accuracy: 0.4413 - val_loss: 1.7401 - val_accuracy: 0.4072
Epoch 4/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5231 - accuracy: 0.4634 - val_loss: 1.5728 - val_accuracy: 0.4584
Epoch 5/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.4619 - accuracy: 0.4887 - val_loss: 1.5448 - val_accuracy: 0.4702
Epoch 6/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.4074 - accuracy: 0.5061 - val_loss: 1.5678 - val_accuracy: 0.4664
Epoch 7/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.3718 - accuracy: 0.5222 - val_loss: 1.5764 - val_accuracy: 0.4824
Epoch 8/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.3220 - accuracy: 0.5387 - val_loss: 1.4805 - val_accuracy: 0.4890
Epoch 9/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2908 - accuracy: 0.5487 - val_loss: 1.5521 - val_accuracy: 0.4638
Epoch 10/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.2537 - accuracy: 0.5607 - val_loss: 1.5281 - val_accuracy: 0.4924
Epoch 11/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2215 - accuracy: 0.5782 - val_loss: 1.5147 - val_accuracy: 0.5046
Epoch 12/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.1910 - accuracy: 0.5831 - val_loss: 1.5248 - val_accuracy: 0.5002
Epoch 13/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1659 - accuracy: 0.5982 - val_loss: 1.5620 - val_accuracy: 0.5066
Epoch 14/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1282 - accuracy: 0.6120 - val_loss: 1.5440 - val_accuracy: 0.5180
Epoch 15/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1127 - accuracy: 0.6133 - val_loss: 1.5782 - val_accuracy: 0.5146
Epoch 16/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0917 - accuracy: 0.6266 - val_loss: 1.6182 - val_accuracy: 0.5182
Epoch 17/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.0620 - accuracy: 0.6331 - val_loss: 1.6285 - val_accuracy: 0.5126
Epoch 18/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0433 - accuracy: 0.6413 - val_loss: 1.6299 - val_accuracy: 0.5158
Epoch 19/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0087 - accuracy: 0.6549 - val_loss: 1.7172 - val_accuracy: 0.5062
Epoch 20/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9950 - accuracy: 0.6571 - val_loss: 1.6524 - val_accuracy: 0.5098
Epoch 21/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9848 - accuracy: 0.6652 - val_loss: 1.7686 - val_accuracy: 0.5038
Epoch 22/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9597 - accuracy: 0.6744 - val_loss: 1.6177 - val_accuracy: 0.5084
Epoch 23/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9399 - accuracy: 0.6790 - val_loss: 1.7095 - val_accuracy: 0.5082
Epoch 24/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9148 - accuracy: 0.6884 - val_loss: 1.7160 - val_accuracy: 0.5150
Epoch 25/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9023 - accuracy: 0.6949 - val_loss: 1.7017 - val_accuracy: 0.5152
Epoch 26/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8732 - accuracy: 0.7031 - val_loss: 1.7274 - val_accuracy: 0.5088
Epoch 27/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.8542 - accuracy: 0.7091 - val_loss: 1.7648 - val_accuracy: 0.5166
Epoch 28/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8499 - accuracy: 0.7118 - val_loss: 1.7973 - val_accuracy: 0.5000
157/157 [==============================] - 0s 1ms/step - loss: 1.4805 - accuracy: 0.4890
###Markdown
The model reaches 48.9% accuracy on the validation set. That's very slightly better than without dropout (47.6%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
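# By forcing training=True, alpha dropout stays active at inference time, so each forward
# pass through the model is stochastic; averaging many such passes gives the MC Dropout estimate.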
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
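# mc_dropout_predict_classes runs the stochastic model n_samples times and averages the
# predicted probabilities before taking the argmax. Note that y_valid is a column vector of
# shape (n, 1) for CIFAR-10, hence the [:, 0] when comparing it with the 1D y_pred.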
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get no accuracy improvement in this case (we're still at 48.9% accuracy). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(math.ceil(len(X_train_scaled) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/15
352/352 [==============================] - 3s 6ms/step - loss: 2.2298 - accuracy: 0.2349 - val_loss: 1.7841 - val_accuracy: 0.3834
Epoch 2/15
352/352 [==============================] - 2s 6ms/step - loss: 1.7928 - accuracy: 0.3689 - val_loss: 1.6806 - val_accuracy: 0.4086
Epoch 3/15
352/352 [==============================] - 2s 6ms/step - loss: 1.6475 - accuracy: 0.4190 - val_loss: 1.6378 - val_accuracy: 0.4350
Epoch 4/15
352/352 [==============================] - 2s 6ms/step - loss: 1.5428 - accuracy: 0.4543 - val_loss: 1.6266 - val_accuracy: 0.4390
Epoch 5/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4865 - accuracy: 0.4769 - val_loss: 1.6158 - val_accuracy: 0.4384
Epoch 6/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4339 - accuracy: 0.4866 - val_loss: 1.5850 - val_accuracy: 0.4412
Epoch 7/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4042 - accuracy: 0.5056 - val_loss: 1.6146 - val_accuracy: 0.4384
Epoch 8/15
352/352 [==============================] - 2s 6ms/step - loss: 1.3437 - accuracy: 0.5229 - val_loss: 1.5299 - val_accuracy: 0.4846
Epoch 9/15
352/352 [==============================] - 2s 5ms/step - loss: 1.2721 - accuracy: 0.5459 - val_loss: 1.5145 - val_accuracy: 0.4874
Epoch 10/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1942 - accuracy: 0.5698 - val_loss: 1.4958 - val_accuracy: 0.5040
Epoch 11/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1211 - accuracy: 0.6033 - val_loss: 1.5406 - val_accuracy: 0.4984
Epoch 12/15
352/352 [==============================] - 2s 6ms/step - loss: 1.0673 - accuracy: 0.6161 - val_loss: 1.5284 - val_accuracy: 0.5144
Epoch 13/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9927 - accuracy: 0.6435 - val_loss: 1.5449 - val_accuracy: 0.5140
Epoch 14/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9205 - accuracy: 0.6703 - val_loss: 1.5652 - val_accuracy: 0.5224
Epoch 15/15
352/352 [==============================] - 2s 6ms/step - loss: 0.8936 - accuracy: 0.6801 - val_loss: 1.5912 - val_accuracy: 0.5198
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 2s 41us/sample - loss: 1.2810 - accuracy: 0.6205 - val_loss: 0.8869 - val_accuracy: 0.7160
Epoch 2/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.7952 - accuracy: 0.7369 - val_loss: 0.7132 - val_accuracy: 0.7626
Epoch 3/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6817 - accuracy: 0.7726 - val_loss: 0.6385 - val_accuracy: 0.7894
Epoch 4/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6219 - accuracy: 0.7942 - val_loss: 0.5931 - val_accuracy: 0.8016
Epoch 5/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5830 - accuracy: 0.8074 - val_loss: 0.5607 - val_accuracy: 0.8170
Epoch 6/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5552 - accuracy: 0.8172 - val_loss: 0.5355 - val_accuracy: 0.8238
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5339 - accuracy: 0.8226 - val_loss: 0.5166 - val_accuracy: 0.8298
Epoch 8/10
55000/55000 [==============================] - 2s 43us/sample - loss: 0.5173 - accuracy: 0.8262 - val_loss: 0.5043 - val_accuracy: 0.8356
Epoch 9/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5039 - accuracy: 0.8306 - val_loss: 0.4889 - val_accuracy: 0.8384
Epoch 10/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.4923 - accuracy: 0.8333 - val_loss: 0.4816 - val_accuracy: 0.8394
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 47us/sample - loss: 1.3452 - accuracy: 0.6203 - val_loss: 0.9241 - val_accuracy: 0.7170
Epoch 2/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.8196 - accuracy: 0.7364 - val_loss: 0.7314 - val_accuracy: 0.7600
Epoch 3/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.6970 - accuracy: 0.7701 - val_loss: 0.6517 - val_accuracy: 0.7880
Epoch 4/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.6333 - accuracy: 0.7914 - val_loss: 0.6032 - val_accuracy: 0.8050
Epoch 5/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5916 - accuracy: 0.8049 - val_loss: 0.5689 - val_accuracy: 0.8162
Epoch 6/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5619 - accuracy: 0.8143 - val_loss: 0.5416 - val_accuracy: 0.8222
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5391 - accuracy: 0.8208 - val_loss: 0.5213 - val_accuracy: 0.8300
Epoch 8/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5214 - accuracy: 0.8258 - val_loss: 0.5075 - val_accuracy: 0.8348
Epoch 9/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5070 - accuracy: 0.8287 - val_loss: 0.4917 - val_accuracy: 0.8380
Epoch 10/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.4946 - accuracy: 0.8322 - val_loss: 0.4839 - val_accuracy: 0.8378
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 13s 238us/sample - loss: 1.1277 - accuracy: 0.5573 - val_loss: 0.8152 - val_accuracy: 0.6700
Epoch 2/5
55000/55000 [==============================] - 11s 198us/sample - loss: 0.6935 - accuracy: 0.7383 - val_loss: 0.5806 - val_accuracy: 0.7928
Epoch 3/5
55000/55000 [==============================] - 11s 196us/sample - loss: 0.5871 - accuracy: 0.7865 - val_loss: 0.6876 - val_accuracy: 0.7462
Epoch 4/5
55000/55000 [==============================] - 11s 199us/sample - loss: 0.5281 - accuracy: 0.8134 - val_loss: 0.5236 - val_accuracy: 0.8230
Epoch 5/5
55000/55000 [==============================] - 11s 201us/sample - loss: 0.4824 - accuracy: 0.8327 - val_loss: 0.5201 - val_accuracy: 0.8312
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 12s 213us/sample - loss: 1.7518 - accuracy: 0.2797 - val_loss: 1.2328 - val_accuracy: 0.4720
Epoch 2/5
55000/55000 [==============================] - 10s 177us/sample - loss: 1.1922 - accuracy: 0.4982 - val_loss: 1.0247 - val_accuracy: 0.5354
Epoch 3/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.9390 - accuracy: 0.6180 - val_loss: 1.0809 - val_accuracy: 0.5118
Epoch 4/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.7787 - accuracy: 0.6937 - val_loss: 0.7067 - val_accuracy: 0.7344
Epoch 5/5
55000/55000 [==============================] - 10s 180us/sample - loss: 0.7465 - accuracy: 0.7122 - val_loss: 0.9720 - val_accuracy: 0.5702
###Markdown
Not great at all: we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 63us/sample - loss: 0.8760 - accuracy: 0.7122 - val_loss: 0.5509 - val_accuracy: 0.8224
Epoch 2/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5737 - accuracy: 0.8039 - val_loss: 0.4723 - val_accuracy: 0.8460
Epoch 3/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5143 - accuracy: 0.8231 - val_loss: 0.4376 - val_accuracy: 0.8570
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4826 - accuracy: 0.8333 - val_loss: 0.4135 - val_accuracy: 0.8638
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4571 - accuracy: 0.8415 - val_loss: 0.3990 - val_accuracy: 0.8654
Epoch 6/10
55000/55000 [==============================] - 3s 53us/sample - loss: 0.4432 - accuracy: 0.8456 - val_loss: 0.3870 - val_accuracy: 0.8710
Epoch 7/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.4255 - accuracy: 0.8515 - val_loss: 0.3782 - val_accuracy: 0.8698
Epoch 8/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4150 - accuracy: 0.8536 - val_loss: 0.3708 - val_accuracy: 0.8758
Epoch 9/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4016 - accuracy: 0.8596 - val_loss: 0.3634 - val_accuracy: 0.8750
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3915 - accuracy: 0.8629 - val_loss: 0.3601 - val_accuracy: 0.8758
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer already includes an offset parameter per input: keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 4s 64us/sample - loss: 0.8656 - accuracy: 0.7094 - val_loss: 0.5650 - val_accuracy: 0.8098
Epoch 2/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5766 - accuracy: 0.8018 - val_loss: 0.4834 - val_accuracy: 0.8358
Epoch 3/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5184 - accuracy: 0.8216 - val_loss: 0.4461 - val_accuracy: 0.8470
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4852 - accuracy: 0.8314 - val_loss: 0.4226 - val_accuracy: 0.8558
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4579 - accuracy: 0.8399 - val_loss: 0.4086 - val_accuracy: 0.8604
Epoch 6/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4406 - accuracy: 0.8457 - val_loss: 0.3974 - val_accuracy: 0.8640
Epoch 7/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4263 - accuracy: 0.8498 - val_loss: 0.3883 - val_accuracy: 0.8676
Epoch 8/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4152 - accuracy: 0.8530 - val_loss: 0.3803 - val_accuracy: 0.8682
Epoch 9/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4032 - accuracy: 0.8564 - val_loss: 0.3738 - val_accuracy: 0.8718
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3937 - accuracy: 0.8623 - val_loss: 0.3690 - val_accuracy: 0.8732
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
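# clipvalue clips every gradient component to [-1.0, 1.0] (which can change the gradient's
# direction), while clipnorm rescales the whole gradient vector if its L2 norm exceeds 1.0
# (preserving its direction). Either optimizer can then be passed to model.compile() as usual.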
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Train on 200 samples, validate on 986 samples
Epoch 1/4
200/200 [==============================] - 0s 2ms/sample - loss: 0.5619 - accuracy: 0.6650 - val_loss: 0.5669 - val_accuracy: 0.6531
Epoch 2/4
200/200 [==============================] - 0s 208us/sample - loss: 0.5249 - accuracy: 0.7200 - val_loss: 0.5337 - val_accuracy: 0.6957
Epoch 3/4
200/200 [==============================] - 0s 200us/sample - loss: 0.4923 - accuracy: 0.7400 - val_loss: 0.5039 - val_accuracy: 0.7211
Epoch 4/4
200/200 [==============================] - 0s 214us/sample - loss: 0.4630 - accuracy: 0.7550 - val_loss: 0.4773 - val_accuracy: 0.7383
Train on 200 samples, validate on 986 samples
Epoch 1/16
200/200 [==============================] - 0s 2ms/sample - loss: 0.3864 - accuracy: 0.8200 - val_loss: 0.3357 - val_accuracy: 0.8661
Epoch 2/16
200/200 [==============================] - 0s 207us/sample - loss: 0.2701 - accuracy: 0.9350 - val_loss: 0.2608 - val_accuracy: 0.9249
Epoch 3/16
200/200 [==============================] - 0s 226us/sample - loss: 0.2082 - accuracy: 0.9650 - val_loss: 0.2150 - val_accuracy: 0.9503
Epoch 4/16
200/200 [==============================] - 0s 212us/sample - loss: 0.1695 - accuracy: 0.9800 - val_loss: 0.1840 - val_accuracy: 0.9625
Epoch 5/16
200/200 [==============================] - 0s 226us/sample - loss: 0.1428 - accuracy: 0.9800 - val_loss: 0.1602 - val_accuracy: 0.9706
Epoch 6/16
200/200 [==============================] - 0s 236us/sample - loss: 0.1221 - accuracy: 0.9850 - val_loss: 0.1424 - val_accuracy: 0.9797
Epoch 7/16
200/200 [==============================] - 0s 218us/sample - loss: 0.1067 - accuracy: 0.9950 - val_loss: 0.1293 - val_accuracy: 0.9828
Epoch 8/16
200/200 [==============================] - 0s 229us/sample - loss: 0.0952 - accuracy: 0.9950 - val_loss: 0.1186 - val_accuracy: 0.9848
Epoch 9/16
200/200 [==============================] - 0s 224us/sample - loss: 0.0858 - accuracy: 0.9950 - val_loss: 0.1099 - val_accuracy: 0.9848
Epoch 10/16
200/200 [==============================] - 0s 241us/sample - loss: 0.0781 - accuracy: 1.0000 - val_loss: 0.1026 - val_accuracy: 0.9878
Epoch 11/16
200/200 [==============================] - 0s 234us/sample - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0964 - val_accuracy: 0.9888
Epoch 12/16
200/200 [==============================] - 0s 222us/sample - loss: 0.0664 - accuracy: 1.0000 - val_loss: 0.0906 - val_accuracy: 0.9888
Epoch 13/16
200/200 [==============================] - 0s 228us/sample - loss: 0.0614 - accuracy: 1.0000 - val_loss: 0.0862 - val_accuracy: 0.9899
Epoch 14/16
200/200 [==============================] - 0s 225us/sample - loss: 0.0575 - accuracy: 1.0000 - val_loss: 0.0818 - val_accuracy: 0.9899
Epoch 15/16
200/200 [==============================] - 0s 219us/sample - loss: 0.0537 - accuracy: 1.0000 - val_loss: 0.0782 - val_accuracy: 0.9899
Epoch 16/16
200/200 [==============================] - 0s 221us/sample - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.0752 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
2000/2000 [==============================] - 0s 25us/sample - loss: 0.0697 - accuracy: 0.9925
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4!
###Code
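# 99.25% is model_B_on_A's test accuracy (shown in the output above) and 96.95% is model_B's
# test accuracy from the first evaluate() call, so this ratio compares the two error rates:
# 3.05% / 0.75% is roughly 4.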
(100 - 96.95) / (100 - 99.25)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
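With the settings used below (`lr0=0.01`, `decay=1e-4`), `s = 1/decay = 10,000` steps, so the learning rate drops to `0.01 / 2 = 0.005` after 10,000 steps and to `0.01 / 3 ≈ 0.0033` after 20,000 steps.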
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
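With `lr0=0.01` and `s=20` as in the code below, the learning rate is divided by 10 every 20 epochs: 0.01 at epoch 0, 0.001 at epoch 20, 0.0001 at epoch 40, and so on.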
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
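# Multiplying the previous learning rate by 0.1**(1/20) at every epoch compounds to an overall
# factor of 0.1 every 20 epochs, just like the closed-form schedule above, except that it starts
# from the optimizer's current learning rate rather than a hard-coded lr0.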
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
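# ReduceLROnPlateau monitors the validation loss by default and multiplies the learning rate
# by `factor` (here, halving it) whenever the loss has not improved for `patience` epochs.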
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
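# ExponentialDecay(initial_learning_rate, decay_steps, decay_rate): the learning rate decays
# smoothly by a factor of 0.1 every `s` training steps (staircase=False by default), i.e. it is
# divided by 10 roughly every 20 epochs here.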
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4872 - accuracy: 0.8296 - val_loss: 0.4141 - val_accuracy: 0.8548
Epoch 2/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3829 - accuracy: 0.8643 - val_loss: 0.3773 - val_accuracy: 0.8704
Epoch 3/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3495 - accuracy: 0.8763 - val_loss: 0.3696 - val_accuracy: 0.8730
Epoch 4/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3274 - accuracy: 0.8831 - val_loss: 0.3545 - val_accuracy: 0.8760
Epoch 5/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3102 - accuracy: 0.8899 - val_loss: 0.3460 - val_accuracy: 0.8784
Epoch 6/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2971 - accuracy: 0.8945 - val_loss: 0.3415 - val_accuracy: 0.8796
Epoch 7/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2858 - accuracy: 0.8985 - val_loss: 0.3353 - val_accuracy: 0.8834
Epoch 8/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2767 - accuracy: 0.9018 - val_loss: 0.3321 - val_accuracy: 0.8854
Epoch 9/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2685 - accuracy: 0.9043 - val_loss: 0.3281 - val_accuracy: 0.8862
Epoch 10/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2612 - accuracy: 0.9075 - val_loss: 0.3304 - val_accuracy: 0.8832
Epoch 11/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2554 - accuracy: 0.9097 - val_loss: 0.3261 - val_accuracy: 0.8868
Epoch 12/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2502 - accuracy: 0.9115 - val_loss: 0.3246 - val_accuracy: 0.8876
Epoch 13/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2456 - accuracy: 0.9133 - val_loss: 0.3243 - val_accuracy: 0.8870
Epoch 14/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2416 - accuracy: 0.9141 - val_loss: 0.3238 - val_accuracy: 0.8862
Epoch 15/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2380 - accuracy: 0.9170 - val_loss: 0.3197 - val_accuracy: 0.8876
Epoch 16/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2346 - accuracy: 0.9169 - val_loss: 0.3207 - val_accuracy: 0.8866
Epoch 17/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2321 - accuracy: 0.9186 - val_loss: 0.3182 - val_accuracy: 0.8878
Epoch 18/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2291 - accuracy: 0.9191 - val_loss: 0.3206 - val_accuracy: 0.8884
Epoch 19/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2271 - accuracy: 0.9201 - val_loss: 0.3194 - val_accuracy: 0.8876
Epoch 20/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2252 - accuracy: 0.9215 - val_loss: 0.3178 - val_accuracy: 0.8880
Epoch 21/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2234 - accuracy: 0.9218 - val_loss: 0.3171 - val_accuracy: 0.8904
Epoch 22/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2218 - accuracy: 0.9230 - val_loss: 0.3171 - val_accuracy: 0.8884
Epoch 23/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2204 - accuracy: 0.9227 - val_loss: 0.3168 - val_accuracy: 0.8882
Epoch 24/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2191 - accuracy: 0.9240 - val_loss: 0.3173 - val_accuracy: 0.8900
Epoch 25/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2182 - accuracy: 0.9239 - val_loss: 0.3166 - val_accuracy: 0.8892
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.6576 - accuracy: 0.7743 - val_loss: 0.4901 - val_accuracy: 0.8300
Epoch 2/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.4587 - accuracy: 0.8387 - val_loss: 0.4316 - val_accuracy: 0.8490
Epoch 3/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.4119 - accuracy: 0.8560 - val_loss: 0.4117 - val_accuracy: 0.8580
Epoch 4/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.3842 - accuracy: 0.8657 - val_loss: 0.3920 - val_accuracy: 0.8638
Epoch 5/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3636 - accuracy: 0.8708 - val_loss: 0.3739 - val_accuracy: 0.8710
Epoch 6/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3460 - accuracy: 0.8767 - val_loss: 0.3742 - val_accuracy: 0.8690
Epoch 7/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.3312 - accuracy: 0.8818 - val_loss: 0.3760 - val_accuracy: 0.8656
Epoch 8/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.3194 - accuracy: 0.8846 - val_loss: 0.3583 - val_accuracy: 0.8756
Epoch 9/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3056 - accuracy: 0.8902 - val_loss: 0.3474 - val_accuracy: 0.8820
Epoch 10/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2943 - accuracy: 0.8937 - val_loss: 0.3993 - val_accuracy: 0.8562
Epoch 11/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2845 - accuracy: 0.8957 - val_loss: 0.3446 - val_accuracy: 0.8820
Epoch 12/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2720 - accuracy: 0.9020 - val_loss: 0.3348 - val_accuracy: 0.8808
Epoch 13/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2536 - accuracy: 0.9094 - val_loss: 0.3386 - val_accuracy: 0.8822
Epoch 14/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2420 - accuracy: 0.9125 - val_loss: 0.3313 - val_accuracy: 0.8858
Epoch 15/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.2288 - accuracy: 0.9174 - val_loss: 0.3241 - val_accuracy: 0.8840
Epoch 16/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2169 - accuracy: 0.9222 - val_loss: 0.3342 - val_accuracy: 0.8846
Epoch 17/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2067 - accuracy: 0.9264 - val_loss: 0.3208 - val_accuracy: 0.8874
Epoch 18/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1977 - accuracy: 0.9301 - val_loss: 0.3186 - val_accuracy: 0.8888
Epoch 19/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1892 - accuracy: 0.9329 - val_loss: 0.3278 - val_accuracy: 0.8848
Epoch 20/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1818 - accuracy: 0.9375 - val_loss: 0.3195 - val_accuracy: 0.8894
Epoch 21/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1756 - accuracy: 0.9395 - val_loss: 0.3163 - val_accuracy: 0.8948
Epoch 22/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.1701 - accuracy: 0.9416 - val_loss: 0.3177 - val_accuracy: 0.8920
Epoch 23/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.1657 - accuracy: 0.9441 - val_loss: 0.3168 - val_accuracy: 0.8944
Epoch 24/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1629 - accuracy: 0.9454 - val_loss: 0.3167 - val_accuracy: 0.8946
Epoch 25/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.1611 - accuracy: 0.9465 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 7s 133us/sample - loss: 1.6006 - accuracy: 0.8129 - val_loss: 0.7374 - val_accuracy: 0.8236
Epoch 2/2
55000/55000 [==============================] - 7s 128us/sample - loss: 0.7179 - accuracy: 0.8265 - val_loss: 0.6905 - val_accuracy: 0.8356
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 139us/sample - loss: 0.5856 - accuracy: 0.7992 - val_loss: 0.3908 - val_accuracy: 0.8570
Epoch 2/2
55000/55000 [==============================] - 6s 117us/sample - loss: 0.4260 - accuracy: 0.8443 - val_loss: 0.3389 - val_accuracy: 0.8730
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
Train on 55000 samples
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4186 - accuracy: 0.8451
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
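###Markdown
If you use MC Dropout predictions in several places, it can help to wrap the averaging loop in a small helper. The cell below is only a sketch; `mc_dropout_predict_probas` is a hypothetical name, not part of the Keras API.
###Code
# Hypothetical helper (not a Keras API): averages the class probabilities over
# `n_samples` stochastic forward passes of a model whose dropout layers remain
# active at inference time, such as the `mc_model` built above.
def mc_dropout_predict_probas(mc_model, X, n_samples=100):
    y_probas = np.stack([mc_model.predict(X) for _ in range(n_samples)])
    return y_probas.mean(axis=0)
np.round(mc_dropout_predict_probas(mc_model, X_test_scaled[:1]), 2)
###Output
_____no_output_____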
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 6s 114us/sample - loss: 0.4734 - accuracy: 0.8364 - val_loss: 0.3999 - val_accuracy: 0.8614
Epoch 2/2
55000/55000 [==============================] - 6s 100us/sample - loss: 0.3583 - accuracy: 0.8685 - val_loss: 0.3494 - val_accuracy: 0.8746
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
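###Markdown
The exercise also asks for Nadam optimization and early stopping on CIFAR10. The cell below is only a minimal sketch of those remaining steps, assuming the `model` built above; the `_c`-suffixed variable names and the 5e-5 learning rate are placeholders (the exercise itself tells you to search for a good learning rate).
###Code
# Minimal sketch (assumptions: the `model` built above; 5e-5 is an untuned
# placeholder learning rate).
(X_train_full_c, y_train_full_c), (X_test_c, y_test_c) = keras.datasets.cifar10.load_data()
X_train_c, X_valid_c = X_train_full_c[5000:], X_train_full_c[:5000]
y_train_c, y_valid_c = y_train_full_c[5000:], y_train_full_c[:5000]
model.compile(loss="sparse_categorical_crossentropy",
              optimizer=keras.optimizers.Nadam(lr=5e-5),
              metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20, restore_best_weights=True)
history = model.fit(X_train_c, y_train_c, epochs=100,
                    validation_data=(X_valid_c, y_valid_c),
                    callbacks=[early_stopping_cb])
###Output
_____no_output_____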
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 2s 41us/sample - loss: 1.2810 - accuracy: 0.6205 - val_loss: 0.8869 - val_accuracy: 0.7160
Epoch 2/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.7952 - accuracy: 0.7369 - val_loss: 0.7132 - val_accuracy: 0.7626
Epoch 3/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6817 - accuracy: 0.7726 - val_loss: 0.6385 - val_accuracy: 0.7894
Epoch 4/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6219 - accuracy: 0.7942 - val_loss: 0.5931 - val_accuracy: 0.8016
Epoch 5/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5830 - accuracy: 0.8074 - val_loss: 0.5607 - val_accuracy: 0.8170
Epoch 6/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5552 - accuracy: 0.8172 - val_loss: 0.5355 - val_accuracy: 0.8238
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5339 - accuracy: 0.8226 - val_loss: 0.5166 - val_accuracy: 0.8298
Epoch 8/10
55000/55000 [==============================] - 2s 43us/sample - loss: 0.5173 - accuracy: 0.8262 - val_loss: 0.5043 - val_accuracy: 0.8356
Epoch 9/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5039 - accuracy: 0.8306 - val_loss: 0.4889 - val_accuracy: 0.8384
Epoch 10/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.4923 - accuracy: 0.8333 - val_loss: 0.4816 - val_accuracy: 0.8394
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 47us/sample - loss: 1.3452 - accuracy: 0.6203 - val_loss: 0.9241 - val_accuracy: 0.7170
Epoch 2/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.8196 - accuracy: 0.7364 - val_loss: 0.7314 - val_accuracy: 0.7600
Epoch 3/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.6970 - accuracy: 0.7701 - val_loss: 0.6517 - val_accuracy: 0.7880
Epoch 4/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.6333 - accuracy: 0.7914 - val_loss: 0.6032 - val_accuracy: 0.8050
Epoch 5/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5916 - accuracy: 0.8049 - val_loss: 0.5689 - val_accuracy: 0.8162
Epoch 6/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5619 - accuracy: 0.8143 - val_loss: 0.5416 - val_accuracy: 0.8222
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5391 - accuracy: 0.8208 - val_loss: 0.5213 - val_accuracy: 0.8300
Epoch 8/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5214 - accuracy: 0.8258 - val_loss: 0.5075 - val_accuracy: 0.8348
Epoch 9/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5070 - accuracy: 0.8287 - val_loss: 0.4917 - val_accuracy: 0.8380
Epoch 10/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.4946 - accuracy: 0.8322 - val_loss: 0.4839 - val_accuracy: 0.8378
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 13s 238us/sample - loss: 1.1277 - accuracy: 0.5573 - val_loss: 0.8152 - val_accuracy: 0.6700
Epoch 2/5
55000/55000 [==============================] - 11s 198us/sample - loss: 0.6935 - accuracy: 0.7383 - val_loss: 0.5806 - val_accuracy: 0.7928
Epoch 3/5
55000/55000 [==============================] - 11s 196us/sample - loss: 0.5871 - accuracy: 0.7865 - val_loss: 0.6876 - val_accuracy: 0.7462
Epoch 4/5
55000/55000 [==============================] - 11s 199us/sample - loss: 0.5281 - accuracy: 0.8134 - val_loss: 0.5236 - val_accuracy: 0.8230
Epoch 5/5
55000/55000 [==============================] - 11s 201us/sample - loss: 0.4824 - accuracy: 0.8327 - val_loss: 0.5201 - val_accuracy: 0.8312
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 12s 213us/sample - loss: 1.7518 - accuracy: 0.2797 - val_loss: 1.2328 - val_accuracy: 0.4720
Epoch 2/5
55000/55000 [==============================] - 10s 177us/sample - loss: 1.1922 - accuracy: 0.4982 - val_loss: 1.0247 - val_accuracy: 0.5354
Epoch 3/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.9390 - accuracy: 0.6180 - val_loss: 1.0809 - val_accuracy: 0.5118
Epoch 4/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.7787 - accuracy: 0.6937 - val_loss: 0.7067 - val_accuracy: 0.7344
Epoch 5/5
55000/55000 [==============================] - 10s 180us/sample - loss: 0.7465 - accuracy: 0.7122 - val_loss: 0.9720 - val_accuracy: 0.5702
###Markdown
Not great at all, we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 63us/sample - loss: 0.8760 - accuracy: 0.7122 - val_loss: 0.5509 - val_accuracy: 0.8224
Epoch 2/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5737 - accuracy: 0.8039 - val_loss: 0.4723 - val_accuracy: 0.8460
Epoch 3/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5143 - accuracy: 0.8231 - val_loss: 0.4376 - val_accuracy: 0.8570
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4826 - accuracy: 0.8333 - val_loss: 0.4135 - val_accuracy: 0.8638
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4571 - accuracy: 0.8415 - val_loss: 0.3990 - val_accuracy: 0.8654
Epoch 6/10
55000/55000 [==============================] - 3s 53us/sample - loss: 0.4432 - accuracy: 0.8456 - val_loss: 0.3870 - val_accuracy: 0.8710
Epoch 7/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.4255 - accuracy: 0.8515 - val_loss: 0.3782 - val_accuracy: 0.8698
Epoch 8/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4150 - accuracy: 0.8536 - val_loss: 0.3708 - val_accuracy: 0.8758
Epoch 9/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4016 - accuracy: 0.8596 - val_loss: 0.3634 - val_accuracy: 0.8750
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3915 - accuracy: 0.8629 - val_loss: 0.3601 - val_accuracy: 0.8758
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has some as well (its offset parameter plays the same role), so having both would be a waste of parameters; you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 4s 64us/sample - loss: 0.8656 - accuracy: 0.7094 - val_loss: 0.5650 - val_accuracy: 0.8098
Epoch 2/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5766 - accuracy: 0.8018 - val_loss: 0.4834 - val_accuracy: 0.8358
Epoch 3/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5184 - accuracy: 0.8216 - val_loss: 0.4461 - val_accuracy: 0.8470
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4852 - accuracy: 0.8314 - val_loss: 0.4226 - val_accuracy: 0.8558
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4579 - accuracy: 0.8399 - val_loss: 0.4086 - val_accuracy: 0.8604
Epoch 6/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4406 - accuracy: 0.8457 - val_loss: 0.3974 - val_accuracy: 0.8640
Epoch 7/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4263 - accuracy: 0.8498 - val_loss: 0.3883 - val_accuracy: 0.8676
Epoch 8/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4152 - accuracy: 0.8530 - val_loss: 0.3803 - val_accuracy: 0.8682
Epoch 9/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4032 - accuracy: 0.8564 - val_loss: 0.3738 - val_accuracy: 0.8718
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3937 - accuracy: 0.8623 - val_loss: 0.3690 - val_accuracy: 0.8732
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
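###Markdown
To show where such an optimizer goes, here is a minimal sketch (the `clip_model` name and its layer sizes are arbitrary, not a model used elsewhere in this notebook): a clip-configured optimizer is simply passed to `compile()` like any other.
###Code
# Minimal sketch: with clipvalue=1.0, every gradient component is clipped to
# the range [-1.0, 1.0] during training. `clip_model` is a throwaway model.
clip_model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"),
    keras.layers.Dense(10, activation="softmax")
])
clip_model.compile(loss="sparse_categorical_crossentropy",
                   optimizer=keras.optimizers.SGD(clipvalue=1.0),
                   metrics=["accuracy"])
###Output
_____no_output_____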
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Train on 200 samples, validate on 986 samples
Epoch 1/4
200/200 [==============================] - 0s 2ms/sample - loss: 0.5619 - accuracy: 0.6650 - val_loss: 0.5669 - val_accuracy: 0.6531
Epoch 2/4
200/200 [==============================] - 0s 208us/sample - loss: 0.5249 - accuracy: 0.7200 - val_loss: 0.5337 - val_accuracy: 0.6957
Epoch 3/4
200/200 [==============================] - 0s 200us/sample - loss: 0.4923 - accuracy: 0.7400 - val_loss: 0.5039 - val_accuracy: 0.7211
Epoch 4/4
200/200 [==============================] - 0s 214us/sample - loss: 0.4630 - accuracy: 0.7550 - val_loss: 0.4773 - val_accuracy: 0.7383
Train on 200 samples, validate on 986 samples
Epoch 1/16
200/200 [==============================] - 0s 2ms/sample - loss: 0.3864 - accuracy: 0.8200 - val_loss: 0.3357 - val_accuracy: 0.8661
Epoch 2/16
200/200 [==============================] - 0s 207us/sample - loss: 0.2701 - accuracy: 0.9350 - val_loss: 0.2608 - val_accuracy: 0.9249
Epoch 3/16
200/200 [==============================] - 0s 226us/sample - loss: 0.2082 - accuracy: 0.9650 - val_loss: 0.2150 - val_accuracy: 0.9503
Epoch 4/16
200/200 [==============================] - 0s 212us/sample - loss: 0.1695 - accuracy: 0.9800 - val_loss: 0.1840 - val_accuracy: 0.9625
Epoch 5/16
200/200 [==============================] - 0s 226us/sample - loss: 0.1428 - accuracy: 0.9800 - val_loss: 0.1602 - val_accuracy: 0.9706
Epoch 6/16
200/200 [==============================] - 0s 236us/sample - loss: 0.1221 - accuracy: 0.9850 - val_loss: 0.1424 - val_accuracy: 0.9797
Epoch 7/16
200/200 [==============================] - 0s 218us/sample - loss: 0.1067 - accuracy: 0.9950 - val_loss: 0.1293 - val_accuracy: 0.9828
Epoch 8/16
200/200 [==============================] - 0s 229us/sample - loss: 0.0952 - accuracy: 0.9950 - val_loss: 0.1186 - val_accuracy: 0.9848
Epoch 9/16
200/200 [==============================] - 0s 224us/sample - loss: 0.0858 - accuracy: 0.9950 - val_loss: 0.1099 - val_accuracy: 0.9848
Epoch 10/16
200/200 [==============================] - 0s 241us/sample - loss: 0.0781 - accuracy: 1.0000 - val_loss: 0.1026 - val_accuracy: 0.9878
Epoch 11/16
200/200 [==============================] - 0s 234us/sample - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0964 - val_accuracy: 0.9888
Epoch 12/16
200/200 [==============================] - 0s 222us/sample - loss: 0.0664 - accuracy: 1.0000 - val_loss: 0.0906 - val_accuracy: 0.9888
Epoch 13/16
200/200 [==============================] - 0s 228us/sample - loss: 0.0614 - accuracy: 1.0000 - val_loss: 0.0862 - val_accuracy: 0.9899
Epoch 14/16
200/200 [==============================] - 0s 225us/sample - loss: 0.0575 - accuracy: 1.0000 - val_loss: 0.0818 - val_accuracy: 0.9899
Epoch 15/16
200/200 [==============================] - 0s 219us/sample - loss: 0.0537 - accuracy: 1.0000 - val_loss: 0.0782 - val_accuracy: 0.9899
Epoch 16/16
200/200 [==============================] - 0s 221us/sample - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.0752 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
2000/2000 [==============================] - 0s 25us/sample - loss: 0.0697 - accuracy: 0.9925
###Markdown
Great! We got quite a bit of transfer: model_B reaches about 96.95% test accuracy (a 3.05% error rate), while model_B_on_A reaches about 99.25% (a 0.75% error rate), so the error rate dropped by a factor of roughly 4!
###Code
(100 - 96.95) / (100 - 99.25)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
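###Markdown
This variant multiplies the optimizer's current learning rate by 0.1**(1/20) at the start of each epoch, so it picks up from whatever learning rate the optimizer currently holds (handy when resuming training). Hooking it up is the same as before; a minimal sketch:
###Code
# Minimal sketch: Keras detects that the schedule function takes two arguments
# and passes it the current learning rate in addition to the epoch index.
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
# ...then pass callbacks=[lr_scheduler] to model.fit(), as in the cells above.
###Output
_____no_output_____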
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4872 - accuracy: 0.8296 - val_loss: 0.4141 - val_accuracy: 0.8548
Epoch 2/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3829 - accuracy: 0.8643 - val_loss: 0.3773 - val_accuracy: 0.8704
Epoch 3/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3495 - accuracy: 0.8763 - val_loss: 0.3696 - val_accuracy: 0.8730
Epoch 4/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3274 - accuracy: 0.8831 - val_loss: 0.3545 - val_accuracy: 0.8760
Epoch 5/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3102 - accuracy: 0.8899 - val_loss: 0.3460 - val_accuracy: 0.8784
Epoch 6/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2971 - accuracy: 0.8945 - val_loss: 0.3415 - val_accuracy: 0.8796
Epoch 7/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2858 - accuracy: 0.8985 - val_loss: 0.3353 - val_accuracy: 0.8834
Epoch 8/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2767 - accuracy: 0.9018 - val_loss: 0.3321 - val_accuracy: 0.8854
Epoch 9/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2685 - accuracy: 0.9043 - val_loss: 0.3281 - val_accuracy: 0.8862
Epoch 10/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2612 - accuracy: 0.9075 - val_loss: 0.3304 - val_accuracy: 0.8832
Epoch 11/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2554 - accuracy: 0.9097 - val_loss: 0.3261 - val_accuracy: 0.8868
Epoch 12/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2502 - accuracy: 0.9115 - val_loss: 0.3246 - val_accuracy: 0.8876
Epoch 13/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2456 - accuracy: 0.9133 - val_loss: 0.3243 - val_accuracy: 0.8870
Epoch 14/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2416 - accuracy: 0.9141 - val_loss: 0.3238 - val_accuracy: 0.8862
Epoch 15/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2380 - accuracy: 0.9170 - val_loss: 0.3197 - val_accuracy: 0.8876
Epoch 16/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2346 - accuracy: 0.9169 - val_loss: 0.3207 - val_accuracy: 0.8866
Epoch 17/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2321 - accuracy: 0.9186 - val_loss: 0.3182 - val_accuracy: 0.8878
Epoch 18/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2291 - accuracy: 0.9191 - val_loss: 0.3206 - val_accuracy: 0.8884
Epoch 19/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2271 - accuracy: 0.9201 - val_loss: 0.3194 - val_accuracy: 0.8876
Epoch 20/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2252 - accuracy: 0.9215 - val_loss: 0.3178 - val_accuracy: 0.8880
Epoch 21/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2234 - accuracy: 0.9218 - val_loss: 0.3171 - val_accuracy: 0.8904
Epoch 22/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2218 - accuracy: 0.9230 - val_loss: 0.3171 - val_accuracy: 0.8884
Epoch 23/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2204 - accuracy: 0.9227 - val_loss: 0.3168 - val_accuracy: 0.8882
Epoch 24/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2191 - accuracy: 0.9240 - val_loss: 0.3173 - val_accuracy: 0.8900
Epoch 25/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2182 - accuracy: 0.9239 - val_loss: 0.3166 - val_accuracy: 0.8892
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
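# 1cycle schedule: the learning rate ramps linearly from start_rate up to
# max_rate over roughly the first half of training, back down to start_rate
# over the second half, then decays linearly to last_rate for the final
# iterations; the rate is updated once per batch in on_batch_begin().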
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.6576 - accuracy: 0.7743 - val_loss: 0.4901 - val_accuracy: 0.8300
Epoch 2/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.4587 - accuracy: 0.8387 - val_loss: 0.4316 - val_accuracy: 0.8490
Epoch 3/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.4119 - accuracy: 0.8560 - val_loss: 0.4117 - val_accuracy: 0.8580
Epoch 4/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.3842 - accuracy: 0.8657 - val_loss: 0.3920 - val_accuracy: 0.8638
Epoch 5/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3636 - accuracy: 0.8708 - val_loss: 0.3739 - val_accuracy: 0.8710
Epoch 6/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3460 - accuracy: 0.8767 - val_loss: 0.3742 - val_accuracy: 0.8690
Epoch 7/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.3312 - accuracy: 0.8818 - val_loss: 0.3760 - val_accuracy: 0.8656
Epoch 8/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.3194 - accuracy: 0.8846 - val_loss: 0.3583 - val_accuracy: 0.8756
Epoch 9/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3056 - accuracy: 0.8902 - val_loss: 0.3474 - val_accuracy: 0.8820
Epoch 10/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2943 - accuracy: 0.8937 - val_loss: 0.3993 - val_accuracy: 0.8562
Epoch 11/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2845 - accuracy: 0.8957 - val_loss: 0.3446 - val_accuracy: 0.8820
Epoch 12/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2720 - accuracy: 0.9020 - val_loss: 0.3348 - val_accuracy: 0.8808
Epoch 13/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2536 - accuracy: 0.9094 - val_loss: 0.3386 - val_accuracy: 0.8822
Epoch 14/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2420 - accuracy: 0.9125 - val_loss: 0.3313 - val_accuracy: 0.8858
Epoch 15/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.2288 - accuracy: 0.9174 - val_loss: 0.3241 - val_accuracy: 0.8840
Epoch 16/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2169 - accuracy: 0.9222 - val_loss: 0.3342 - val_accuracy: 0.8846
Epoch 17/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2067 - accuracy: 0.9264 - val_loss: 0.3208 - val_accuracy: 0.8874
Epoch 18/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1977 - accuracy: 0.9301 - val_loss: 0.3186 - val_accuracy: 0.8888
Epoch 19/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1892 - accuracy: 0.9329 - val_loss: 0.3278 - val_accuracy: 0.8848
Epoch 20/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1818 - accuracy: 0.9375 - val_loss: 0.3195 - val_accuracy: 0.8894
Epoch 21/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1756 - accuracy: 0.9395 - val_loss: 0.3163 - val_accuracy: 0.8948
Epoch 22/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.1701 - accuracy: 0.9416 - val_loss: 0.3177 - val_accuracy: 0.8920
Epoch 23/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.1657 - accuracy: 0.9441 - val_loss: 0.3168 - val_accuracy: 0.8944
Epoch 24/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1629 - accuracy: 0.9454 - val_loss: 0.3167 - val_accuracy: 0.8946
Epoch 25/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.1611 - accuracy: 0.9465 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
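# functools.partial pre-fills the arguments shared by every hidden layer,
# giving a reusable factory for regularized Dense layers.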
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 7s 133us/sample - loss: 1.6006 - accuracy: 0.8129 - val_loss: 0.7374 - val_accuracy: 0.8236
Epoch 2/2
55000/55000 [==============================] - 7s 128us/sample - loss: 0.7179 - accuracy: 0.8265 - val_loss: 0.6905 - val_accuracy: 0.8356
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 139us/sample - loss: 0.5856 - accuracy: 0.7992 - val_loss: 0.3908 - val_accuracy: 0.8570
Epoch 2/2
55000/55000 [==============================] - 6s 117us/sample - loss: 0.4260 - accuracy: 0.8443 - val_loss: 0.3389 - val_accuracy: 0.8730
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
Train on 55000 samples
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4186 - accuracy: 0.8451
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
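# Calling the model with training=True keeps the (Alpha)Dropout layers active
# at inference time, so each of the 100 forward passes yields a different
# random prediction; their mean and standard deviation are the MC Dropout
# estimates used below.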
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
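# Dropout subclasses that always run in training mode, so dropout stays active
# at prediction time without having to pass training=True on every call.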
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 6s 114us/sample - loss: 0.4734 - accuracy: 0.8364 - val_loss: 0.3999 - val_accuracy: 0.8614
Epoch 2/2
55000/55000 [==============================] - 6s 100us/sample - loss: 0.3583 - accuracy: 0.8685 - val_loss: 0.3494 - val_accuracy: 0.8746
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
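###Markdown
The learning-rate comparison mentioned earlier (10 epochs per candidate rate, compared with the TensorBoard callback) isn't shown in this notebook. The cell below is a rough sketch of how such a search could be run; it rebuilds the same 20-hidden-layer architecture for every candidate rate, and the helper name and log directories are just for illustration:
###Code
# Rough sketch (assumption, not executed here): train a fresh copy of the
# network briefly at each candidate learning rate and log every run to its own
# TensorBoard directory so the learning curves can be compared side by side.
def build_cifar10_dnn():
    dnn = keras.models.Sequential()
    dnn.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
    for _ in range(20):
        dnn.add(keras.layers.Dense(100, activation="elu",
                                    kernel_initializer="he_normal"))
    dnn.add(keras.layers.Dense(10, activation="softmax"))
    return dnn
for candidate_lr in (1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3, 1e-2):
    keras.backend.clear_session()
    candidate = build_cifar10_dnn()
    candidate.compile(loss="sparse_categorical_crossentropy",
                      optimizer=keras.optimizers.Nadam(lr=candidate_lr),
                      metrics=["accuracy"])
    lr_logdir = os.path.join(os.curdir, "my_cifar10_logs",
                             "run_lr_{:.0e}".format(candidate_lr))
    candidate.fit(X_train, y_train, epochs=10,
                  validation_data=(X_valid, y_valid),
                  callbacks=[keras.callbacks.TensorBoard(lr_logdir)])
###Output
_____no_output_____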
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
5000/5000 [==============================] - 0s 65us/sample - loss: 1.5099 - accuracy: 0.4736
###Markdown
The model with the lowest validation loss gets about 47% accuracy on the validation set. It took 39 epochs to reach the lowest validation loss, with roughly 10 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 21s 466us/sample - loss: 1.8365 - accuracy: 0.3390 - val_loss: 1.6330 - val_accuracy: 0.4174
Epoch 2/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.6623 - accuracy: 0.4063 - val_loss: 1.5967 - val_accuracy: 0.4204
Epoch 3/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.5946 - accuracy: 0.4314 - val_loss: 1.5225 - val_accuracy: 0.4602
Epoch 4/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5417 - accuracy: 0.4551 - val_loss: 1.4680 - val_accuracy: 0.4756
Epoch 5/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5013 - accuracy: 0.4678 - val_loss: 1.4378 - val_accuracy: 0.4862
Epoch 6/100
45000/45000 [==============================] - 16s 361us/sample - loss: 1.4637 - accuracy: 0.4797 - val_loss: 1.4221 - val_accuracy: 0.4982
Epoch 7/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.4361 - accuracy: 0.4921 - val_loss: 1.4133 - val_accuracy: 0.4968
Epoch 8/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.4078 - accuracy: 0.4998 - val_loss: 1.3916 - val_accuracy: 0.5040
Epoch 9/100
45000/45000 [==============================] - 14s 315us/sample - loss: 1.3811 - accuracy: 0.5104 - val_loss: 1.3695 - val_accuracy: 0.5116
Epoch 10/100
45000/45000 [==============================] - 14s 318us/sample - loss: 1.3571 - accuracy: 0.5205 - val_loss: 1.3701 - val_accuracy: 0.5112
Epoch 11/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.3367 - accuracy: 0.5246 - val_loss: 1.3549 - val_accuracy: 0.5196
Epoch 12/100
45000/45000 [==============================] - 14s 316us/sample - loss: 1.3158 - accuracy: 0.5322 - val_loss: 1.4038 - val_accuracy: 0.5048
Epoch 13/100
45000/45000 [==============================] - 15s 328us/sample - loss: 1.3028 - accuracy: 0.5392 - val_loss: 1.3453 - val_accuracy: 0.5242
Epoch 14/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2798 - accuracy: 0.5460 - val_loss: 1.3427 - val_accuracy: 0.5218
Epoch 15/100
45000/45000 [==============================] - 15s 327us/sample - loss: 1.2642 - accuracy: 0.5502 - val_loss: 1.3802 - val_accuracy: 0.5072
Epoch 16/100
45000/45000 [==============================] - 15s 336us/sample - loss: 1.2497 - accuracy: 0.5592 - val_loss: 1.3870 - val_accuracy: 0.5154
Epoch 17/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.2339 - accuracy: 0.5645 - val_loss: 1.3270 - val_accuracy: 0.5366
Epoch 18/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2223 - accuracy: 0.5688 - val_loss: 1.3054 - val_accuracy: 0.5506
Epoch 19/100
45000/45000 [==============================] - 15s 339us/sample - loss: 1.2015 - accuracy: 0.5750 - val_loss: 1.3134 - val_accuracy: 0.5462
Epoch 20/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.1884 - accuracy: 0.5796 - val_loss: 1.3459 - val_accuracy: 0.5252
Epoch 21/100
45000/45000 [==============================] - 17s 370us/sample - loss: 1.1767 - accuracy: 0.5876 - val_loss: 1.3404 - val_accuracy: 0.5392
Epoch 22/100
45000/45000 [==============================] - 16s 366us/sample - loss: 1.1679 - accuracy: 0.5872 - val_loss: 1.3600 - val_accuracy: 0.5332
Epoch 23/100
45000/45000 [==============================] - 15s 337us/sample - loss: 1.1513 - accuracy: 0.5954 - val_loss: 1.3148 - val_accuracy: 0.5498
Epoch 24/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.1345 - accuracy: 0.6033 - val_loss: 1.3290 - val_accuracy: 0.5368
Epoch 25/100
45000/45000 [==============================] - 16s 350us/sample - loss: 1.1252 - accuracy: 0.6025 - val_loss: 1.3350 - val_accuracy: 0.5434
Epoch 26/100
45000/45000 [==============================] - 15s 341us/sample - loss: 1.1192 - accuracy: 0.6070 - val_loss: 1.3423 - val_accuracy: 0.5364
Epoch 27/100
45000/45000 [==============================] - 15s 342us/sample - loss: 1.1028 - accuracy: 0.6093 - val_loss: 1.3511 - val_accuracy: 0.5358
Epoch 28/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.0907 - accuracy: 0.6158 - val_loss: 1.3706 - val_accuracy: 0.5350
Epoch 29/100
45000/45000 [==============================] - 16s 345us/sample - loss: 1.0785 - accuracy: 0.6197 - val_loss: 1.3356 - val_accuracy: 0.5398
Epoch 30/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.0718 - accuracy: 0.6198 - val_loss: 1.3529 - val_accuracy: 0.5446
Epoch 31/100
45000/45000 [==============================] - 15s 333us/sample - loss: 1.0629 - accuracy: 0.6259 - val_loss: 1.3590 - val_accuracy: 0.5434
Epoch 32/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.0504 - accuracy: 0.6292 - val_loss: 1.3448 - val_accuracy: 0.5388
Epoch 33/100
45000/45000 [==============================] - 15s 325us/sample - loss: 1.0420 - accuracy: 0.6318 - val_loss: 1.3790 - val_accuracy: 0.5350
Epoch 34/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.0304 - accuracy: 0.6362 - val_loss: 1.3621 - val_accuracy: 0.5430
Epoch 35/100
45000/45000 [==============================] - 16s 356us/sample - loss: 1.0280 - accuracy: 0.6362 - val_loss: 1.3673 - val_accuracy: 0.5366
Epoch 36/100
45000/45000 [==============================] - 16s 354us/sample - loss: 1.0100 - accuracy: 0.6439 - val_loss: 1.3659 - val_accuracy: 0.5420
Epoch 37/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.0060 - accuracy: 0.6473 - val_loss: 1.3773 - val_accuracy: 0.5398
Epoch 38/100
45000/45000 [==============================] - 15s 332us/sample - loss: 0.9966 - accuracy: 0.6496 - val_loss: 1.3946 - val_accuracy: 0.5340
5000/5000 [==============================] - 1s 157us/sample - loss: 1.3054 - accuracy: 0.5506
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 39 epochs to reach the lowest validation loss, while the new model with BN took 18 epochs. That's more than twice as fast as the previous model. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 55% accuracy instead of 47%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged twice as fast, each epoch took about 16s instead of 10s, because of the extra computations required by the BN layers. So overall, although the number of epochs was reduced by 50%, the training time (wall time) was shortened by roughly 30%, which is still pretty significant! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
5000/5000 [==============================] - 0s 74us/sample - loss: 1.4626 - accuracy: 0.5140
###Markdown
We get 51.4% accuracy, which is better than the original model, but not quite as good as the model using batch normalization. Moreover, it took 13 epochs to reach the best model, which is much faster than both the original model and the BN model, plus each epoch took only 10 seconds, just like the original model. So it's by far the fastest model to train (both in terms of epochs and wall time). e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 12s 263us/sample - loss: 1.8763 - accuracy: 0.3330 - val_loss: 1.7595 - val_accuracy: 0.3668
Epoch 2/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.6527 - accuracy: 0.4148 - val_loss: 1.7666 - val_accuracy: 0.3808
Epoch 3/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.5682 - accuracy: 0.4439 - val_loss: 1.6393 - val_accuracy: 0.4490
Epoch 4/100
45000/45000 [==============================] - 10s 211us/sample - loss: 1.5030 - accuracy: 0.4698 - val_loss: 1.6028 - val_accuracy: 0.4466
Epoch 5/100
45000/45000 [==============================] - 9s 209us/sample - loss: 1.4430 - accuracy: 0.4913 - val_loss: 1.5394 - val_accuracy: 0.4562
Epoch 6/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.4005 - accuracy: 0.5084 - val_loss: 1.5408 - val_accuracy: 0.4818
Epoch 7/100
45000/45000 [==============================] - 10s 216us/sample - loss: 1.3541 - accuracy: 0.5298 - val_loss: 1.5236 - val_accuracy: 0.4866
Epoch 8/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.3189 - accuracy: 0.5405 - val_loss: 1.5174 - val_accuracy: 0.4926
Epoch 9/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.2800 - accuracy: 0.5570 - val_loss: 1.5722 - val_accuracy: 0.4998
Epoch 10/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.2512 - accuracy: 0.5656 - val_loss: 1.4974 - val_accuracy: 0.5082
Epoch 11/100
45000/45000 [==============================] - 9s 203us/sample - loss: 1.2141 - accuracy: 0.5802 - val_loss: 1.6123 - val_accuracy: 0.4916
Epoch 12/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.1856 - accuracy: 0.5893 - val_loss: 1.5449 - val_accuracy: 0.5016
Epoch 13/100
45000/45000 [==============================] - 9s 204us/sample - loss: 1.1602 - accuracy: 0.5978 - val_loss: 1.6241 - val_accuracy: 0.5056
Epoch 14/100
45000/45000 [==============================] - 9s 199us/sample - loss: 1.1290 - accuracy: 0.6118 - val_loss: 1.6085 - val_accuracy: 0.4936
Epoch 15/100
45000/45000 [==============================] - 9s 198us/sample - loss: 1.1050 - accuracy: 0.6176 - val_loss: 1.6951 - val_accuracy: 0.4860
Epoch 16/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.0786 - accuracy: 0.6293 - val_loss: 1.5806 - val_accuracy: 0.5044
Epoch 17/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.0629 - accuracy: 0.6362 - val_loss: 1.5932 - val_accuracy: 0.4970
Epoch 18/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.0330 - accuracy: 0.6458 - val_loss: 1.5968 - val_accuracy: 0.5080
Epoch 19/100
45000/45000 [==============================] - 9s 195us/sample - loss: 1.0104 - accuracy: 0.6488 - val_loss: 1.6166 - val_accuracy: 0.5152
Epoch 20/100
45000/45000 [==============================] - 9s 206us/sample - loss: 0.9896 - accuracy: 0.6629 - val_loss: 1.6174 - val_accuracy: 0.5154
Epoch 21/100
45000/45000 [==============================] - 9s 211us/sample - loss: 0.9741 - accuracy: 0.6650 - val_loss: 1.7201 - val_accuracy: 0.5040
Epoch 22/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9475 - accuracy: 0.6769 - val_loss: 1.7498 - val_accuracy: 0.5176
Epoch 23/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.9346 - accuracy: 0.6780 - val_loss: 1.7491 - val_accuracy: 0.5020
Epoch 24/100
45000/45000 [==============================] - 10s 223us/sample - loss: 1.1878 - accuracy: 0.6792 - val_loss: 1.6664 - val_accuracy: 0.4906
Epoch 25/100
45000/45000 [==============================] - 10s 219us/sample - loss: 0.9851 - accuracy: 0.6646 - val_loss: 1.7358 - val_accuracy: 0.5086
Epoch 26/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9053 - accuracy: 0.6911 - val_loss: 1.8361 - val_accuracy: 0.5094
Epoch 27/100
45000/45000 [==============================] - 10s 215us/sample - loss: 0.8681 - accuracy: 0.7048 - val_loss: 1.8487 - val_accuracy: 0.5036
Epoch 28/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.8460 - accuracy: 0.7132 - val_loss: 1.8516 - val_accuracy: 0.5068
Epoch 29/100
45000/45000 [==============================] - 10s 223us/sample - loss: 0.8258 - accuracy: 0.7208 - val_loss: 1.9383 - val_accuracy: 0.5094
Epoch 30/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.8106 - accuracy: 0.7248 - val_loss: 2.0527 - val_accuracy: 0.4974
5000/5000 [==============================] - 0s 71us/sample - loss: 1.4974 - accuracy: 0.5082
###Markdown
The model reaches 50.8% accuracy on the validation set. That's very slightly worse than without dropout (51.4%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get virtually no accuracy improvement in this case (from 50.8% to 50.9%). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(len(X_train_scaled) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/15
45000/45000 [==============================] - 3s 69us/sample - loss: 2.0504 - accuracy: 0.2823 - val_loss: 1.7711 - val_accuracy: 0.3706
Epoch 2/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.7626 - accuracy: 0.3766 - val_loss: 1.7751 - val_accuracy: 0.3844
Epoch 3/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.6264 - accuracy: 0.4272 - val_loss: 1.6774 - val_accuracy: 0.4216
Epoch 4/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.5527 - accuracy: 0.4474 - val_loss: 1.6633 - val_accuracy: 0.4316
Epoch 5/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.4997 - accuracy: 0.4701 - val_loss: 1.5909 - val_accuracy: 0.4540
Epoch 6/15
45000/45000 [==============================] - 3s 60us/sample - loss: 1.4564 - accuracy: 0.4841 - val_loss: 1.5982 - val_accuracy: 0.4624
Epoch 7/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.4232 - accuracy: 0.4958 - val_loss: 1.6417 - val_accuracy: 0.4382
Epoch 8/15
45000/45000 [==============================] - 3s 58us/sample - loss: 1.3530 - accuracy: 0.5199 - val_loss: 1.5050 - val_accuracy: 0.4778
Epoch 9/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.2771 - accuracy: 0.5480 - val_loss: 1.5254 - val_accuracy: 0.4928
Epoch 10/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.2073 - accuracy: 0.5726 - val_loss: 1.5013 - val_accuracy: 0.5052
Epoch 11/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.1380 - accuracy: 0.5948 - val_loss: 1.4941 - val_accuracy: 0.5170
Epoch 12/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.0672 - accuracy: 0.6204 - val_loss: 1.5091 - val_accuracy: 0.5106
Epoch 13/15
45000/45000 [==============================] - 3s 56us/sample - loss: 0.9967 - accuracy: 0.6466 - val_loss: 1.5261 - val_accuracy: 0.5212
Epoch 14/15
45000/45000 [==============================] - 3s 58us/sample - loss: 0.9301 - accuracy: 0.6712 - val_loss: 1.5437 - val_accuracy: 0.5264
Epoch 15/15
45000/45000 [==============================] - 3s 59us/sample - loss: 0.8893 - accuracy: 0.6866 - val_loss: 1.5650 - val_accuracy: 0.5276
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0-preview.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# TensorFlow ≥2.0-preview is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
leaky_relu = keras.layers.LeakyReLU(alpha=0.2)
layer = keras.layers.Dense(10, activation=leaky_relu)
layer.activation
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation=leaky_relu),
keras.layers.Dense(100, activation=leaky_relu),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 64us/sample - loss: 1.3979 - accuracy: 0.5948 - val_loss: 0.9369 - val_accuracy: 0.7162
Epoch 2/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.8333 - accuracy: 0.7341 - val_loss: 0.7392 - val_accuracy: 0.7638
Epoch 3/10
55000/55000 [==============================] - 3s 58us/sample - loss: 0.7068 - accuracy: 0.7711 - val_loss: 0.6561 - val_accuracy: 0.7906
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.6417 - accuracy: 0.7889 - val_loss: 0.6052 - val_accuracy: 0.8088
Epoch 5/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.5988 - accuracy: 0.8019 - val_loss: 0.5716 - val_accuracy: 0.8166
Epoch 6/10
55000/55000 [==============================] - 3s 58us/sample - loss: 0.5686 - accuracy: 0.8118 - val_loss: 0.5465 - val_accuracy: 0.8234
Epoch 7/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.5460 - accuracy: 0.8181 - val_loss: 0.5273 - val_accuracy: 0.8314
Epoch 8/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.5281 - accuracy: 0.8229 - val_loss: 0.5108 - val_accuracy: 0.8370
Epoch 9/10
55000/55000 [==============================] - 3s 60us/sample - loss: 0.5137 - accuracy: 0.8261 - val_loss: 0.4985 - val_accuracy: 0.8398
Epoch 10/10
55000/55000 [==============================] - 3s 53us/sample - loss: 0.5018 - accuracy: 0.8289 - val_loss: 0.4901 - val_accuracy: 0.8382
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 35s 644us/sample - loss: 1.0197 - accuracy: 0.6154 - val_loss: 0.7386 - val_accuracy: 0.7348
Epoch 2/5
55000/55000 [==============================] - 33s 607us/sample - loss: 0.7149 - accuracy: 0.7401 - val_loss: 0.6187 - val_accuracy: 0.7774
Epoch 3/5
55000/55000 [==============================] - 32s 583us/sample - loss: 0.6193 - accuracy: 0.7803 - val_loss: 0.5926 - val_accuracy: 0.8036
Epoch 4/5
55000/55000 [==============================] - 32s 586us/sample - loss: 0.5555 - accuracy: 0.8043 - val_loss: 0.5208 - val_accuracy: 0.8262
Epoch 5/5
55000/55000 [==============================] - 32s 573us/sample - loss: 0.5159 - accuracy: 0.8238 - val_loss: 0.4790 - val_accuracy: 0.8358
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 18s 319us/sample - loss: 1.9174 - accuracy: 0.2242 - val_loss: 1.3856 - val_accuracy: 0.3846
Epoch 2/5
55000/55000 [==============================] - 15s 279us/sample - loss: 1.2147 - accuracy: 0.4750 - val_loss: 1.0691 - val_accuracy: 0.5510
Epoch 3/5
55000/55000 [==============================] - 15s 281us/sample - loss: 0.9576 - accuracy: 0.6025 - val_loss: 0.7688 - val_accuracy: 0.7036
Epoch 4/5
55000/55000 [==============================] - 15s 281us/sample - loss: 0.8116 - accuracy: 0.6762 - val_loss: 0.7276 - val_accuracy: 0.7288
Epoch 5/5
55000/55000 [==============================] - 15s 278us/sample - loss: 0.8167 - accuracy: 0.6862 - val_loss: 0.7697 - val_accuracy: 0.7032
###Markdown
Not great at all, we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
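# The first BN layer holds four variables: gamma and beta (trainable), plus
# moving_mean and moving_variance (updated during training but not trained).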
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 5s 85us/sample - loss: 0.8756 - accuracy: 0.7140 - val_loss: 0.5514 - val_accuracy: 0.8212
Epoch 2/10
55000/55000 [==============================] - 4s 74us/sample - loss: 0.5765 - accuracy: 0.8033 - val_loss: 0.4742 - val_accuracy: 0.8436
Epoch 3/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.5146 - accuracy: 0.8216 - val_loss: 0.4382 - val_accuracy: 0.8530
Epoch 4/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4821 - accuracy: 0.8322 - val_loss: 0.4170 - val_accuracy: 0.8604
Epoch 5/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4589 - accuracy: 0.8402 - val_loss: 0.4003 - val_accuracy: 0.8658
Epoch 6/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4428 - accuracy: 0.8459 - val_loss: 0.3883 - val_accuracy: 0.8698
Epoch 7/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4220 - accuracy: 0.8521 - val_loss: 0.3792 - val_accuracy: 0.8720
Epoch 8/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4150 - accuracy: 0.8546 - val_loss: 0.3696 - val_accuracy: 0.8754
Epoch 9/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4013 - accuracy: 0.8589 - val_loss: 0.3629 - val_accuracy: 0.8746
Epoch 10/10
55000/55000 [==============================] - 4s 74us/sample - loss: 0.3931 - accuracy: 0.8615 - val_loss: 0.3581 - val_accuracy: 0.8766
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has an offset parameter that plays the same role; having both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
    keras.layers.Dense(100, use_bias=False),
    keras.layers.BatchNormalization(),
    keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 5s 89us/sample - loss: 0.8617 - accuracy: 0.7095 - val_loss: 0.5649 - val_accuracy: 0.8102
Epoch 2/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.5803 - accuracy: 0.8015 - val_loss: 0.4833 - val_accuracy: 0.8344
Epoch 3/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.5153 - accuracy: 0.8208 - val_loss: 0.4463 - val_accuracy: 0.8462
Epoch 4/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.4846 - accuracy: 0.8307 - val_loss: 0.4256 - val_accuracy: 0.8530
Epoch 5/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.4576 - accuracy: 0.8402 - val_loss: 0.4106 - val_accuracy: 0.8590
Epoch 6/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4401 - accuracy: 0.8467 - val_loss: 0.3973 - val_accuracy: 0.8610
Epoch 7/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4296 - accuracy: 0.8482 - val_loss: 0.3899 - val_accuracy: 0.8650
Epoch 8/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.4127 - accuracy: 0.8559 - val_loss: 0.3818 - val_accuracy: 0.8658
Epoch 9/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4007 - accuracy: 0.8588 - val_loss: 0.3741 - val_accuracy: 0.8682
Epoch 10/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.3929 - accuracy: 0.8621 - val_loss: 0.3694 - val_accuracy: 0.8734
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
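###Markdown
Here is a minimal sketch (hypothetical model, not part of the original runs) showing how such a clipped optimizer would be used; it is passed to `compile()` like any other optimizer:
###Code
# Minimal sketch (assumption): with clipvalue=1.0 every gradient component is
# clipped to [-1.0, 1.0] before each update performed during fit().
clipped_model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"),
    keras.layers.Dense(10, activation="softmax")
])
clipped_model.compile(loss="sparse_categorical_crossentropy",
                      optimizer=keras.optimizers.SGD(clipvalue=1.0),
                      metrics=["accuracy"])
###Output
_____no_output_____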
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
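# Freeze the reused layers first so the new output layer can learn reasonable weights
# without wrecking the pretrained ones, then unfreeze everything for fine-tuning.
# Note that the model must be recompiled after every change to `trainable`.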
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Train on 200 samples, validate on 986 samples
Epoch 1/4
200/200 [==============================] - 0s 2ms/sample - loss: 0.5851 - accuracy: 0.6600 - val_loss: 0.5855 - val_accuracy: 0.6318
Epoch 2/4
200/200 [==============================] - 0s 303us/sample - loss: 0.5484 - accuracy: 0.6850 - val_loss: 0.5484 - val_accuracy: 0.6775
Epoch 3/4
200/200 [==============================] - 0s 294us/sample - loss: 0.5116 - accuracy: 0.7250 - val_loss: 0.5141 - val_accuracy: 0.7160
Epoch 4/4
200/200 [==============================] - 0s 316us/sample - loss: 0.4779 - accuracy: 0.7450 - val_loss: 0.4859 - val_accuracy: 0.7363
Train on 200 samples, validate on 986 samples
Epoch 1/16
200/200 [==============================] - 0s 2ms/sample - loss: 0.3989 - accuracy: 0.8050 - val_loss: 0.3419 - val_accuracy: 0.8702
Epoch 2/16
200/200 [==============================] - 0s 328us/sample - loss: 0.2795 - accuracy: 0.9300 - val_loss: 0.2624 - val_accuracy: 0.9280
Epoch 3/16
200/200 [==============================] - 0s 319us/sample - loss: 0.2128 - accuracy: 0.9650 - val_loss: 0.2150 - val_accuracy: 0.9544
Epoch 4/16
200/200 [==============================] - 0s 318us/sample - loss: 0.1720 - accuracy: 0.9800 - val_loss: 0.1826 - val_accuracy: 0.9635
Epoch 5/16
200/200 [==============================] - 0s 317us/sample - loss: 0.1436 - accuracy: 0.9800 - val_loss: 0.1586 - val_accuracy: 0.9736
Epoch 6/16
200/200 [==============================] - 0s 317us/sample - loss: 0.1231 - accuracy: 0.9850 - val_loss: 0.1407 - val_accuracy: 0.9807
Epoch 7/16
200/200 [==============================] - 0s 325us/sample - loss: 0.1074 - accuracy: 0.9900 - val_loss: 0.1270 - val_accuracy: 0.9828
Epoch 8/16
200/200 [==============================] - 0s 326us/sample - loss: 0.0953 - accuracy: 0.9950 - val_loss: 0.1158 - val_accuracy: 0.9848
Epoch 9/16
200/200 [==============================] - 0s 319us/sample - loss: 0.0854 - accuracy: 1.0000 - val_loss: 0.1076 - val_accuracy: 0.9878
Epoch 10/16
200/200 [==============================] - 0s 322us/sample - loss: 0.0781 - accuracy: 1.0000 - val_loss: 0.1007 - val_accuracy: 0.9888
Epoch 11/16
200/200 [==============================] - 0s 316us/sample - loss: 0.0718 - accuracy: 1.0000 - val_loss: 0.0944 - val_accuracy: 0.9888
Epoch 12/16
200/200 [==============================] - 0s 319us/sample - loss: 0.0662 - accuracy: 1.0000 - val_loss: 0.0891 - val_accuracy: 0.9899
Epoch 13/16
200/200 [==============================] - 0s 318us/sample - loss: 0.0613 - accuracy: 1.0000 - val_loss: 0.0846 - val_accuracy: 0.9899
Epoch 14/16
200/200 [==============================] - 0s 332us/sample - loss: 0.0574 - accuracy: 1.0000 - val_loss: 0.0806 - val_accuracy: 0.9899
Epoch 15/16
200/200 [==============================] - 0s 320us/sample - loss: 0.0538 - accuracy: 1.0000 - val_loss: 0.0770 - val_accuracy: 0.9899
Epoch 16/16
200/200 [==============================] - 0s 320us/sample - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.0740 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
2000/2000 [==============================] - 0s 38us/sample - loss: 0.0689 - accuracy: 0.9925
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of almost 4!
###Code
(100 - 97.05) / (100 - 99.25)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
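###Markdown
A quick sanity check (added as a sketch, not part of the original notebook): since this form multiplies the current learning rate by the constant factor `0.1**(1/20)` at every epoch, starting from an initial rate of 0.01 it reaches about 0.001 after 20 epochs, i.e. the same trajectory as `lr0 * 0.1**(epoch / 20)` with `lr0 = 0.01`:
###Code
# Added sketch: simulate 20 epochs of the multiplicative schedule defined above.
lr = 0.01
for epoch in range(20):
    lr = exponential_decay_fn(epoch, lr)
print(lr)  # ~0.001, i.e. 0.01 * 0.1
###Output
_____no_output_____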
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
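# Added sanity check (a sketch, not in the original notebook): the factory should
# reproduce the hand-written schedule above -- 0.01 for epochs 0-4, 0.005 for
# epochs 5-14, and 0.001 from epoch 15 onwards.
check_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
print([check_fn(epoch) for epoch in (0, 4, 5, 14, 15, 20)])
# expected: [0.01, 0.01, 0.005, 0.005, 0.001, 0.001]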
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4887 - accuracy: 0.8282 - val_loss: 0.4245 - val_accuracy: 0.8526
Epoch 2/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.3830 - accuracy: 0.8641 - val_loss: 0.3798 - val_accuracy: 0.8688
Epoch 3/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.3491 - accuracy: 0.8758 - val_loss: 0.3650 - val_accuracy: 0.8730
Epoch 4/25
55000/55000 [==============================] - 4s 78us/sample - loss: 0.3267 - accuracy: 0.8839 - val_loss: 0.3564 - val_accuracy: 0.8746
Epoch 5/25
55000/55000 [==============================] - 4s 72us/sample - loss: 0.3102 - accuracy: 0.8893 - val_loss: 0.3493 - val_accuracy: 0.8770
Epoch 6/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2969 - accuracy: 0.8939 - val_loss: 0.3400 - val_accuracy: 0.8818
Epoch 7/25
55000/55000 [==============================] - 4s 77us/sample - loss: 0.2855 - accuracy: 0.8983 - val_loss: 0.3385 - val_accuracy: 0.8830
Epoch 8/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2764 - accuracy: 0.9025 - val_loss: 0.3372 - val_accuracy: 0.8824
Epoch 9/25
55000/55000 [==============================] - 4s 67us/sample - loss: 0.2684 - accuracy: 0.9039 - val_loss: 0.3337 - val_accuracy: 0.8848
Epoch 10/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2613 - accuracy: 0.9072 - val_loss: 0.3277 - val_accuracy: 0.8862
Epoch 11/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2555 - accuracy: 0.9086 - val_loss: 0.3273 - val_accuracy: 0.8860
Epoch 12/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2500 - accuracy: 0.9111 - val_loss: 0.3244 - val_accuracy: 0.8840
Epoch 13/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2454 - accuracy: 0.9124 - val_loss: 0.3194 - val_accuracy: 0.8904
Epoch 14/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2414 - accuracy: 0.9141 - val_loss: 0.3226 - val_accuracy: 0.8884
Epoch 15/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2378 - accuracy: 0.9160 - val_loss: 0.3233 - val_accuracy: 0.8860
Epoch 16/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2347 - accuracy: 0.9174 - val_loss: 0.3207 - val_accuracy: 0.8904
Epoch 17/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2318 - accuracy: 0.9179 - val_loss: 0.3195 - val_accuracy: 0.8892
Epoch 18/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2293 - accuracy: 0.9193 - val_loss: 0.3184 - val_accuracy: 0.8916
Epoch 19/25
55000/55000 [==============================] - 4s 67us/sample - loss: 0.2272 - accuracy: 0.9201 - val_loss: 0.3196 - val_accuracy: 0.8886
Epoch 20/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2253 - accuracy: 0.9206 - val_loss: 0.3190 - val_accuracy: 0.8918
Epoch 21/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2235 - accuracy: 0.9214 - val_loss: 0.3176 - val_accuracy: 0.8912
Epoch 22/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2220 - accuracy: 0.9220 - val_loss: 0.3181 - val_accuracy: 0.8900
Epoch 23/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2206 - accuracy: 0.9226 - val_loss: 0.3187 - val_accuracy: 0.8894
Epoch 24/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2193 - accuracy: 0.9231 - val_loss: 0.3168 - val_accuracy: 0.8908
Epoch 25/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2181 - accuracy: 0.9234 - val_loss: 0.3171 - val_accuracy: 0.8898
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch], values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
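###Markdown
As a rough sketch of how it would be used (added here, not part of the original notebook; it reuses the `model` built in the cells above), the schedule object is simply passed to the optimizer, exactly like `ExponentialDecay` was:
###Code
# Added sketch: plug the piecewise-constant schedule into an optimizer.
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
              metrics=["accuracy"])
###Output
_____no_output_____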
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
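# Added sketch of those alternatives (they are not used further below):
layer_l1 = keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal",
                              kernel_regularizer=keras.regularizers.l1(0.1))
layer_l1_l2 = keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal",
                                 kernel_regularizer=keras.regularizers.l1_l2(0.1, 0.01))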
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 7s 129us/sample - loss: 1.6597 - accuracy: 0.8128 - val_loss: 0.7630 - val_accuracy: 0.8080
Epoch 2/2
55000/55000 [==============================] - 7s 124us/sample - loss: 0.7176 - accuracy: 0.8271 - val_loss: 0.6848 - val_accuracy: 0.8360
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 145us/sample - loss: 0.5741 - accuracy: 0.8030 - val_loss: 0.3841 - val_accuracy: 0.8572
Epoch 2/2
55000/55000 [==============================] - 7s 134us/sample - loss: 0.4218 - accuracy: 0.8469 - val_loss: 0.3534 - val_accuracy: 0.8728
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
#with keras.backend.learning_phase_scope(1): # TODO: check https://github.com/tensorflow/tensorflow/issues/25754
# history = model.fit(X_train_scaled, y_train)
###Output
_____no_output_____
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
with keras.backend.learning_phase_scope(1): # TODO: check https://github.com/tensorflow/tensorflow/issues/25754
y_probas = np.stack([model.predict(X_test_scaled) for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
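# Both subclasses above simply force training=True in call(), so dropout stays active
# at prediction time -- which is exactly what Monte Carlo Dropout needs. MCDropout is
# the drop-in replacement for regular Dropout layers; MCAlphaDropout (used below) plays
# the same role for AlphaDropout, which the model above uses.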
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
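# Added illustration (not in the original notebook): applying the constraint directly
# to a weight tensor shows what it does -- each column whose L2 norm exceeds 1 is
# rescaled back to norm 1 (here the column [3, 4] has norm 5, so it becomes ~[0.6, 0.8]).
print(keras.constraints.max_norm(1.)(tf.constant([[3.0], [4.0]])))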
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 147us/sample - loss: 0.4745 - accuracy: 0.8329 - val_loss: 0.3988 - val_accuracy: 0.8584
Epoch 2/2
55000/55000 [==============================] - 7s 135us/sample - loss: 0.3554 - accuracy: 0.8688 - val_loss: 0.3681 - val_accuracy: 0.8726
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6314 - accuracy: 0.5054 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.8416 - accuracy: 0.7247 - val_loss: 0.7130 - val_accuracy: 0.7656
Epoch 3/10
1719/1719 [==============================] - 2s 879us/step - loss: 0.7053 - accuracy: 0.7637 - val_loss: 0.6427 - val_accuracy: 0.7898
Epoch 4/10
1719/1719 [==============================] - 2s 883us/step - loss: 0.6325 - accuracy: 0.7908 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 2s 887us/step - loss: 0.5992 - accuracy: 0.8021 - val_loss: 0.5582 - val_accuracy: 0.8200
Epoch 6/10
1719/1719 [==============================] - 2s 881us/step - loss: 0.5624 - accuracy: 0.8142 - val_loss: 0.5350 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.5379 - accuracy: 0.8217 - val_loss: 0.5157 - val_accuracy: 0.8304
Epoch 8/10
1719/1719 [==============================] - 2s 895us/step - loss: 0.5152 - accuracy: 0.8295 - val_loss: 0.5078 - val_accuracy: 0.8284
Epoch 9/10
1719/1719 [==============================] - 2s 911us/step - loss: 0.5100 - accuracy: 0.8268 - val_loss: 0.4895 - val_accuracy: 0.8390
Epoch 10/10
1719/1719 [==============================] - 2s 897us/step - loss: 0.4918 - accuracy: 0.8340 - val_loss: 0.4817 - val_accuracy: 0.8396
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6969 - accuracy: 0.4974 - val_loss: 0.9255 - val_accuracy: 0.7186
Epoch 2/10
1719/1719 [==============================] - 2s 990us/step - loss: 0.8706 - accuracy: 0.7247 - val_loss: 0.7305 - val_accuracy: 0.7630
Epoch 3/10
1719/1719 [==============================] - 2s 980us/step - loss: 0.7211 - accuracy: 0.7621 - val_loss: 0.6564 - val_accuracy: 0.7882
Epoch 4/10
1719/1719 [==============================] - 2s 985us/step - loss: 0.6447 - accuracy: 0.7879 - val_loss: 0.6003 - val_accuracy: 0.8048
Epoch 5/10
1719/1719 [==============================] - 2s 967us/step - loss: 0.6077 - accuracy: 0.8004 - val_loss: 0.5656 - val_accuracy: 0.8182
Epoch 6/10
1719/1719 [==============================] - 2s 984us/step - loss: 0.5692 - accuracy: 0.8118 - val_loss: 0.5406 - val_accuracy: 0.8236
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5428 - accuracy: 0.8194 - val_loss: 0.5196 - val_accuracy: 0.8314
Epoch 8/10
1719/1719 [==============================] - 2s 983us/step - loss: 0.5193 - accuracy: 0.8284 - val_loss: 0.5113 - val_accuracy: 0.8316
Epoch 9/10
1719/1719 [==============================] - 2s 992us/step - loss: 0.5128 - accuracy: 0.8272 - val_loss: 0.4916 - val_accuracy: 0.8378
Epoch 10/10
1719/1719 [==============================] - 2s 988us/step - loss: 0.4941 - accuracy: 0.8314 - val_loss: 0.4826 - val_accuracy: 0.8398
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 12s 6ms/step - loss: 1.3556 - accuracy: 0.4808 - val_loss: 0.7711 - val_accuracy: 0.6858
Epoch 2/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7537 - accuracy: 0.7235 - val_loss: 0.7534 - val_accuracy: 0.7384
Epoch 3/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7451 - accuracy: 0.7357 - val_loss: 0.5943 - val_accuracy: 0.7834
Epoch 4/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5699 - accuracy: 0.7906 - val_loss: 0.5434 - val_accuracy: 0.8066
Epoch 5/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5569 - accuracy: 0.8051 - val_loss: 0.4907 - val_accuracy: 0.8218
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 11s 5ms/step - loss: 2.0460 - accuracy: 0.1919 - val_loss: 1.5971 - val_accuracy: 0.3048
Epoch 2/5
1719/1719 [==============================] - 8s 5ms/step - loss: 1.2654 - accuracy: 0.4591 - val_loss: 0.9156 - val_accuracy: 0.6372
Epoch 3/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.9312 - accuracy: 0.6169 - val_loss: 0.8928 - val_accuracy: 0.6246
Epoch 4/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.8188 - accuracy: 0.6710 - val_loss: 0.6914 - val_accuracy: 0.7396
Epoch 5/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.7288 - accuracy: 0.7152 - val_loss: 0.6638 - val_accuracy: 0.7380
###Markdown
Not great at all, we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
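# gamma and beta are learned by backpropagation, whereas moving_mean and
# moving_variance are updated by the layer itself during training, which is why
# they show up as non-trainable variables.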
#bn1.updates #deprecated
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.2287 - accuracy: 0.5993 - val_loss: 0.5526 - val_accuracy: 0.8230
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5996 - accuracy: 0.7959 - val_loss: 0.4725 - val_accuracy: 0.8468
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5312 - accuracy: 0.8168 - val_loss: 0.4375 - val_accuracy: 0.8558
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4884 - accuracy: 0.8294 - val_loss: 0.4153 - val_accuracy: 0.8596
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4717 - accuracy: 0.8343 - val_loss: 0.3997 - val_accuracy: 0.8640
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4420 - accuracy: 0.8461 - val_loss: 0.3867 - val_accuracy: 0.8694
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4285 - accuracy: 0.8496 - val_loss: 0.3763 - val_accuracy: 0.8710
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4086 - accuracy: 0.8552 - val_loss: 0.3711 - val_accuracy: 0.8740
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4079 - accuracy: 0.8566 - val_loss: 0.3631 - val_accuracy: 0.8752
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3903 - accuracy: 0.8617 - val_loss: 0.3573 - val_accuracy: 0.8750
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has its own offset parameters; keeping biases as well would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.3677 - accuracy: 0.5604 - val_loss: 0.6767 - val_accuracy: 0.7812
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.7136 - accuracy: 0.7702 - val_loss: 0.5566 - val_accuracy: 0.8184
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.6123 - accuracy: 0.7990 - val_loss: 0.5007 - val_accuracy: 0.8360
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5547 - accuracy: 0.8148 - val_loss: 0.4666 - val_accuracy: 0.8448
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5255 - accuracy: 0.8230 - val_loss: 0.4434 - val_accuracy: 0.8534
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4947 - accuracy: 0.8328 - val_loss: 0.4263 - val_accuracy: 0.8550
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4736 - accuracy: 0.8385 - val_loss: 0.4130 - val_accuracy: 0.8566
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4550 - accuracy: 0.8446 - val_loss: 0.4035 - val_accuracy: 0.8612
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4495 - accuracy: 0.8440 - val_loss: 0.3943 - val_accuracy: 0.8638
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4333 - accuracy: 0.8494 - val_loss: 0.3875 - val_accuracy: 0.8660
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
###Output
_____no_output_____
###Markdown
Note that `model_B_on_A` and `model_A` actually share layers now, so when we train one, it will update both models. If we want to avoid that, we need to build `model_B_on_A` on top of a *clone* of `model_A`:
###Code
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
model_B_on_A = keras.models.Sequential(model_A_clone.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 0s 29ms/step - loss: 0.2575 - accuracy: 0.9487 - val_loss: 0.2797 - val_accuracy: 0.9270
Epoch 2/4
7/7 [==============================] - 0s 9ms/step - loss: 0.2566 - accuracy: 0.9371 - val_loss: 0.2701 - val_accuracy: 0.9300
Epoch 3/4
7/7 [==============================] - 0s 9ms/step - loss: 0.2473 - accuracy: 0.9332 - val_loss: 0.2613 - val_accuracy: 0.9341
Epoch 4/4
7/7 [==============================] - 0s 10ms/step - loss: 0.2450 - accuracy: 0.9463 - val_loss: 0.2531 - val_accuracy: 0.9391
Epoch 1/16
7/7 [==============================] - 1s 29ms/step - loss: 0.2106 - accuracy: 0.9524 - val_loss: 0.2045 - val_accuracy: 0.9615
Epoch 2/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1738 - accuracy: 0.9526 - val_loss: 0.1719 - val_accuracy: 0.9706
Epoch 3/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1451 - accuracy: 0.9660 - val_loss: 0.1491 - val_accuracy: 0.9807
Epoch 4/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1242 - accuracy: 0.9717 - val_loss: 0.1325 - val_accuracy: 0.9817
Epoch 5/16
7/7 [==============================] - 0s 11ms/step - loss: 0.1078 - accuracy: 0.9855 - val_loss: 0.1200 - val_accuracy: 0.9848
Epoch 6/16
7/7 [==============================] - 0s 11ms/step - loss: 0.1075 - accuracy: 0.9931 - val_loss: 0.1101 - val_accuracy: 0.9858
Epoch 7/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0893 - accuracy: 0.9950 - val_loss: 0.1020 - val_accuracy: 0.9858
Epoch 8/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0815 - accuracy: 0.9950 - val_loss: 0.0953 - val_accuracy: 0.9868
Epoch 9/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0640 - accuracy: 0.9973 - val_loss: 0.0892 - val_accuracy: 0.9868
Epoch 10/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0641 - accuracy: 0.9931 - val_loss: 0.0844 - val_accuracy: 0.9878
Epoch 11/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0609 - accuracy: 0.9931 - val_loss: 0.0800 - val_accuracy: 0.9888
Epoch 12/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0641 - accuracy: 1.0000 - val_loss: 0.0762 - val_accuracy: 0.9888
Epoch 13/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0478 - accuracy: 1.0000 - val_loss: 0.0728 - val_accuracy: 0.9888
Epoch 14/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0444 - accuracy: 1.0000 - val_loss: 0.0700 - val_accuracy: 0.9878
Epoch 15/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0490 - accuracy: 1.0000 - val_loss: 0.0675 - val_accuracy: 0.9878
Epoch 16/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0434 - accuracy: 1.0000 - val_loss: 0.0652 - val_accuracy: 0.9878
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 751us/step - loss: 0.0562 - accuracy: 0.9940
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0-preview.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# TensorFlow ≥2.0-preview is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 50us/sample - loss: 1.2806 - accuracy: 0.6250 - val_loss: 0.8883 - val_accuracy: 0.7152
Epoch 2/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.7954 - accuracy: 0.7373 - val_loss: 0.7135 - val_accuracy: 0.7648
Epoch 3/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.6816 - accuracy: 0.7727 - val_loss: 0.6356 - val_accuracy: 0.7882
Epoch 4/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.6215 - accuracy: 0.7935 - val_loss: 0.5922 - val_accuracy: 0.8012
Epoch 5/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5830 - accuracy: 0.8081 - val_loss: 0.5596 - val_accuracy: 0.8172
Epoch 6/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5553 - accuracy: 0.8155 - val_loss: 0.5338 - val_accuracy: 0.8240
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5340 - accuracy: 0.8221 - val_loss: 0.5157 - val_accuracy: 0.8310
Epoch 8/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5172 - accuracy: 0.8265 - val_loss: 0.5035 - val_accuracy: 0.8336
Epoch 9/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5036 - accuracy: 0.8299 - val_loss: 0.4950 - val_accuracy: 0.8354
Epoch 10/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.4922 - accuracy: 0.8324 - val_loss: 0.4797 - val_accuracy: 0.8430
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 61us/sample - loss: 1.3460 - accuracy: 0.6233 - val_loss: 0.9251 - val_accuracy: 0.7208
Epoch 2/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.8208 - accuracy: 0.7359 - val_loss: 0.7318 - val_accuracy: 0.7626
Epoch 3/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.6974 - accuracy: 0.7695 - val_loss: 0.6500 - val_accuracy: 0.7886
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.6338 - accuracy: 0.7904 - val_loss: 0.6000 - val_accuracy: 0.8070
Epoch 5/10
55000/55000 [==============================] - 3s 57us/sample - loss: 0.5920 - accuracy: 0.8045 - val_loss: 0.5662 - val_accuracy: 0.8172
Epoch 6/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5620 - accuracy: 0.8138 - val_loss: 0.5416 - val_accuracy: 0.8230
Epoch 7/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5393 - accuracy: 0.8203 - val_loss: 0.5218 - val_accuracy: 0.8302
Epoch 8/10
55000/55000 [==============================] - 3s 57us/sample - loss: 0.5216 - accuracy: 0.8248 - val_loss: 0.5051 - val_accuracy: 0.8340
Epoch 9/10
55000/55000 [==============================] - 3s 59us/sample - loss: 0.5069 - accuracy: 0.8289 - val_loss: 0.4923 - val_accuracy: 0.8384
Epoch 10/10
55000/55000 [==============================] - 3s 62us/sample - loss: 0.4948 - accuracy: 0.8322 - val_loss: 0.4847 - val_accuracy: 0.8372
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (
    (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e))
    * np.sqrt(2 * np.pi)
    * (
        2 * erfc(np.sqrt(2)) * np.e ** 2
        + np.pi * erfc(1 / np.sqrt(2)) ** 2 * np.e
        - 2 * (2 + np.pi) * erfc(1 / np.sqrt(2)) * np.sqrt(np.e)
        + np.pi
        + 2
    ) ** (-1 / 2)
)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 35s 644us/sample - loss: 1.0197 - accuracy: 0.6154 - val_loss: 0.7386 - val_accuracy: 0.7348
Epoch 2/5
55000/55000 [==============================] - 33s 607us/sample - loss: 0.7149 - accuracy: 0.7401 - val_loss: 0.6187 - val_accuracy: 0.7774
Epoch 3/5
55000/55000 [==============================] - 32s 583us/sample - loss: 0.6193 - accuracy: 0.7803 - val_loss: 0.5926 - val_accuracy: 0.8036
Epoch 4/5
55000/55000 [==============================] - 32s 586us/sample - loss: 0.5555 - accuracy: 0.8043 - val_loss: 0.5208 - val_accuracy: 0.8262
Epoch 5/5
55000/55000 [==============================] - 32s 573us/sample - loss: 0.5159 - accuracy: 0.8238 - val_loss: 0.4790 - val_accuracy: 0.8358
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 18s 319us/sample - loss: 1.9174 - accuracy: 0.2242 - val_loss: 1.3856 - val_accuracy: 0.3846
Epoch 2/5
55000/55000 [==============================] - 15s 279us/sample - loss: 1.2147 - accuracy: 0.4750 - val_loss: 1.0691 - val_accuracy: 0.5510
Epoch 3/5
55000/55000 [==============================] - 15s 281us/sample - loss: 0.9576 - accuracy: 0.6025 - val_loss: 0.7688 - val_accuracy: 0.7036
Epoch 4/5
55000/55000 [==============================] - 15s 281us/sample - loss: 0.8116 - accuracy: 0.6762 - val_loss: 0.7276 - val_accuracy: 0.7288
Epoch 5/5
55000/55000 [==============================] - 15s 278us/sample - loss: 0.8167 - accuracy: 0.6862 - val_loss: 0.7697 - val_accuracy: 0.7032
###Markdown
Not great at all, we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 5s 85us/sample - loss: 0.8756 - accuracy: 0.7140 - val_loss: 0.5514 - val_accuracy: 0.8212
Epoch 2/10
55000/55000 [==============================] - 4s 74us/sample - loss: 0.5765 - accuracy: 0.8033 - val_loss: 0.4742 - val_accuracy: 0.8436
Epoch 3/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.5146 - accuracy: 0.8216 - val_loss: 0.4382 - val_accuracy: 0.8530
Epoch 4/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4821 - accuracy: 0.8322 - val_loss: 0.4170 - val_accuracy: 0.8604
Epoch 5/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4589 - accuracy: 0.8402 - val_loss: 0.4003 - val_accuracy: 0.8658
Epoch 6/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4428 - accuracy: 0.8459 - val_loss: 0.3883 - val_accuracy: 0.8698
Epoch 7/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4220 - accuracy: 0.8521 - val_loss: 0.3792 - val_accuracy: 0.8720
Epoch 8/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4150 - accuracy: 0.8546 - val_loss: 0.3696 - val_accuracy: 0.8754
Epoch 9/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4013 - accuracy: 0.8589 - val_loss: 0.3629 - val_accuracy: 0.8746
Epoch 10/10
55000/55000 [==============================] - 4s 74us/sample - loss: 0.3931 - accuracy: 0.8615 - val_loss: 0.3581 - val_accuracy: 0.8766
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer adds an offset parameter for each input anyway: keeping separate bias terms would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
    keras.layers.BatchNormalization(),
    keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 5s 89us/sample - loss: 0.8617 - accuracy: 0.7095 - val_loss: 0.5649 - val_accuracy: 0.8102
Epoch 2/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.5803 - accuracy: 0.8015 - val_loss: 0.4833 - val_accuracy: 0.8344
Epoch 3/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.5153 - accuracy: 0.8208 - val_loss: 0.4463 - val_accuracy: 0.8462
Epoch 4/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.4846 - accuracy: 0.8307 - val_loss: 0.4256 - val_accuracy: 0.8530
Epoch 5/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.4576 - accuracy: 0.8402 - val_loss: 0.4106 - val_accuracy: 0.8590
Epoch 6/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4401 - accuracy: 0.8467 - val_loss: 0.3973 - val_accuracy: 0.8610
Epoch 7/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4296 - accuracy: 0.8482 - val_loss: 0.3899 - val_accuracy: 0.8650
Epoch 8/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.4127 - accuracy: 0.8559 - val_loss: 0.3818 - val_accuracy: 0.8658
Epoch 9/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4007 - accuracy: 0.8588 - val_loss: 0.3741 - val_accuracy: 0.8682
Epoch 10/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.3929 - accuracy: 0.8621 - val_loss: 0.3694 - val_accuracy: 0.8734
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
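# For illustration (not part of the original cell): the clipped optimizer is then
# passed to compile() like any other, and clipping is applied at every training step.
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
              metrics=["accuracy"])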
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
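# Note: model_B_on_A shares its layers with model_A, so training it will modify
# model_A too. To avoid that, clone model_A (below) and reuse the clone's layers.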
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Train on 200 samples, validate on 986 samples
Epoch 1/4
200/200 [==============================] - 0s 2ms/sample - loss: 0.5851 - accuracy: 0.6600 - val_loss: 0.5855 - val_accuracy: 0.6318
Epoch 2/4
200/200 [==============================] - 0s 303us/sample - loss: 0.5484 - accuracy: 0.6850 - val_loss: 0.5484 - val_accuracy: 0.6775
Epoch 3/4
200/200 [==============================] - 0s 294us/sample - loss: 0.5116 - accuracy: 0.7250 - val_loss: 0.5141 - val_accuracy: 0.7160
Epoch 4/4
200/200 [==============================] - 0s 316us/sample - loss: 0.4779 - accuracy: 0.7450 - val_loss: 0.4859 - val_accuracy: 0.7363
Train on 200 samples, validate on 986 samples
Epoch 1/16
200/200 [==============================] - 0s 2ms/sample - loss: 0.3989 - accuracy: 0.8050 - val_loss: 0.3419 - val_accuracy: 0.8702
Epoch 2/16
200/200 [==============================] - 0s 328us/sample - loss: 0.2795 - accuracy: 0.9300 - val_loss: 0.2624 - val_accuracy: 0.9280
Epoch 3/16
200/200 [==============================] - 0s 319us/sample - loss: 0.2128 - accuracy: 0.9650 - val_loss: 0.2150 - val_accuracy: 0.9544
Epoch 4/16
200/200 [==============================] - 0s 318us/sample - loss: 0.1720 - accuracy: 0.9800 - val_loss: 0.1826 - val_accuracy: 0.9635
Epoch 5/16
200/200 [==============================] - 0s 317us/sample - loss: 0.1436 - accuracy: 0.9800 - val_loss: 0.1586 - val_accuracy: 0.9736
Epoch 6/16
200/200 [==============================] - 0s 317us/sample - loss: 0.1231 - accuracy: 0.9850 - val_loss: 0.1407 - val_accuracy: 0.9807
Epoch 7/16
200/200 [==============================] - 0s 325us/sample - loss: 0.1074 - accuracy: 0.9900 - val_loss: 0.1270 - val_accuracy: 0.9828
Epoch 8/16
200/200 [==============================] - 0s 326us/sample - loss: 0.0953 - accuracy: 0.9950 - val_loss: 0.1158 - val_accuracy: 0.9848
Epoch 9/16
200/200 [==============================] - 0s 319us/sample - loss: 0.0854 - accuracy: 1.0000 - val_loss: 0.1076 - val_accuracy: 0.9878
Epoch 10/16
200/200 [==============================] - 0s 322us/sample - loss: 0.0781 - accuracy: 1.0000 - val_loss: 0.1007 - val_accuracy: 0.9888
Epoch 11/16
200/200 [==============================] - 0s 316us/sample - loss: 0.0718 - accuracy: 1.0000 - val_loss: 0.0944 - val_accuracy: 0.9888
Epoch 12/16
200/200 [==============================] - 0s 319us/sample - loss: 0.0662 - accuracy: 1.0000 - val_loss: 0.0891 - val_accuracy: 0.9899
Epoch 13/16
200/200 [==============================] - 0s 318us/sample - loss: 0.0613 - accuracy: 1.0000 - val_loss: 0.0846 - val_accuracy: 0.9899
Epoch 14/16
200/200 [==============================] - 0s 332us/sample - loss: 0.0574 - accuracy: 1.0000 - val_loss: 0.0806 - val_accuracy: 0.9899
Epoch 15/16
200/200 [==============================] - 0s 320us/sample - loss: 0.0538 - accuracy: 1.0000 - val_loss: 0.0770 - val_accuracy: 0.9899
Epoch 16/16
200/200 [==============================] - 0s 320us/sample - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.0740 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
2000/2000 [==============================] - 0s 38us/sample - loss: 0.0689 - accuracy: 0.9925
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of almost 4!
###Code
# error-rate ratio: model_B's test error (100 - 97.05) divided by model_B_on_A's (100 - 99.25)
(100 - 97.05) / (100 - 99.25)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
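# Same schedule, written as a factory so that lr0 and s are not hard-coded: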
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr * 0.1 ** (1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
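# A more general version that builds the schedule function from lists of
# boundaries and values: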
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
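# ReduceLROnPlateau halves the LR (factor=0.5) whenever the monitored metric
# (val_loss by default) has not improved for 5 consecutive epochs (patience=5):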
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
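# ExponentialDecay(initial_learning_rate=0.01, decay_steps=s, decay_rate=0.1):
# the LR decays smoothly by a factor of 0.1 every s steps (staircase=False by default)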
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4887 - accuracy: 0.8282 - val_loss: 0.4245 - val_accuracy: 0.8526
Epoch 2/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.3830 - accuracy: 0.8641 - val_loss: 0.3798 - val_accuracy: 0.8688
Epoch 3/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.3491 - accuracy: 0.8758 - val_loss: 0.3650 - val_accuracy: 0.8730
Epoch 4/25
55000/55000 [==============================] - 4s 78us/sample - loss: 0.3267 - accuracy: 0.8839 - val_loss: 0.3564 - val_accuracy: 0.8746
Epoch 5/25
55000/55000 [==============================] - 4s 72us/sample - loss: 0.3102 - accuracy: 0.8893 - val_loss: 0.3493 - val_accuracy: 0.8770
Epoch 6/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2969 - accuracy: 0.8939 - val_loss: 0.3400 - val_accuracy: 0.8818
Epoch 7/25
55000/55000 [==============================] - 4s 77us/sample - loss: 0.2855 - accuracy: 0.8983 - val_loss: 0.3385 - val_accuracy: 0.8830
Epoch 8/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2764 - accuracy: 0.9025 - val_loss: 0.3372 - val_accuracy: 0.8824
Epoch 9/25
55000/55000 [==============================] - 4s 67us/sample - loss: 0.2684 - accuracy: 0.9039 - val_loss: 0.3337 - val_accuracy: 0.8848
Epoch 10/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2613 - accuracy: 0.9072 - val_loss: 0.3277 - val_accuracy: 0.8862
Epoch 11/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2555 - accuracy: 0.9086 - val_loss: 0.3273 - val_accuracy: 0.8860
Epoch 12/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2500 - accuracy: 0.9111 - val_loss: 0.3244 - val_accuracy: 0.8840
Epoch 13/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2454 - accuracy: 0.9124 - val_loss: 0.3194 - val_accuracy: 0.8904
Epoch 14/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2414 - accuracy: 0.9141 - val_loss: 0.3226 - val_accuracy: 0.8884
Epoch 15/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2378 - accuracy: 0.9160 - val_loss: 0.3233 - val_accuracy: 0.8860
Epoch 16/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2347 - accuracy: 0.9174 - val_loss: 0.3207 - val_accuracy: 0.8904
Epoch 17/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2318 - accuracy: 0.9179 - val_loss: 0.3195 - val_accuracy: 0.8892
Epoch 18/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2293 - accuracy: 0.9193 - val_loss: 0.3184 - val_accuracy: 0.8916
Epoch 19/25
55000/55000 [==============================] - 4s 67us/sample - loss: 0.2272 - accuracy: 0.9201 - val_loss: 0.3196 - val_accuracy: 0.8886
Epoch 20/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2253 - accuracy: 0.9206 - val_loss: 0.3190 - val_accuracy: 0.8918
Epoch 21/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2235 - accuracy: 0.9214 - val_loss: 0.3176 - val_accuracy: 0.8912
Epoch 22/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2220 - accuracy: 0.9220 - val_loss: 0.3181 - val_accuracy: 0.8900
Epoch 23/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2206 - accuracy: 0.9226 - val_loss: 0.3187 - val_accuracy: 0.8894
Epoch 24/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2193 - accuracy: 0.9231 - val_loss: 0.3168 - val_accuracy: 0.8908
Epoch 25/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2181 - accuracy: 0.9234 - val_loss: 0.3171 - val_accuracy: 0.8898
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
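# find_learning_rate trains for `epochs` epoch(s) while growing the LR
# exponentially from min_rate to max_rate, records (rate, loss) for every batch,
# then restores the model's initial weights and learning rate.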
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
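# 1cycle schedule: ramp the LR linearly from start_rate up to max_rate over the
# first half of training, back down to start_rate over the second half, then
# drop it linearly to last_rate over the final iterations.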
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.6569 - accuracy: 0.7750 - val_loss: 0.4875 - val_accuracy: 0.8300
Epoch 2/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.4584 - accuracy: 0.8391 - val_loss: 0.4390 - val_accuracy: 0.8476
Epoch 3/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.4124 - accuracy: 0.8541 - val_loss: 0.4102 - val_accuracy: 0.8570
Epoch 4/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.3842 - accuracy: 0.8643 - val_loss: 0.3893 - val_accuracy: 0.8652
Epoch 5/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3641 - accuracy: 0.8707 - val_loss: 0.3736 - val_accuracy: 0.8678
Epoch 6/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.3456 - accuracy: 0.8781 - val_loss: 0.3652 - val_accuracy: 0.8726
Epoch 7/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.3318 - accuracy: 0.8818 - val_loss: 0.3596 - val_accuracy: 0.8768
Epoch 8/25
55000/55000 [==============================] - 1s 24us/sample - loss: 0.3180 - accuracy: 0.8862 - val_loss: 0.3845 - val_accuracy: 0.8602
Epoch 9/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.3062 - accuracy: 0.8893 - val_loss: 0.3824 - val_accuracy: 0.8660
Epoch 10/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.2938 - accuracy: 0.8934 - val_loss: 0.3516 - val_accuracy: 0.8742
Epoch 11/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.2838 - accuracy: 0.8975 - val_loss: 0.3609 - val_accuracy: 0.8740
Epoch 12/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.2716 - accuracy: 0.9025 - val_loss: 0.3843 - val_accuracy: 0.8666
Epoch 13/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.2541 - accuracy: 0.9091 - val_loss: 0.3282 - val_accuracy: 0.8844
Epoch 14/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.2390 - accuracy: 0.9139 - val_loss: 0.3336 - val_accuracy: 0.8838
Epoch 15/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.2273 - accuracy: 0.9177 - val_loss: 0.3283 - val_accuracy: 0.8884
Epoch 16/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.2156 - accuracy: 0.9234 - val_loss: 0.3288 - val_accuracy: 0.8862
Epoch 17/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.2062 - accuracy: 0.9265 - val_loss: 0.3215 - val_accuracy: 0.8896
Epoch 18/25
55000/55000 [==============================] - 1s 24us/sample - loss: 0.1973 - accuracy: 0.9299 - val_loss: 0.3284 - val_accuracy: 0.8912
Epoch 19/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.1892 - accuracy: 0.9344 - val_loss: 0.3229 - val_accuracy: 0.8904
Epoch 20/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.1822 - accuracy: 0.9366 - val_loss: 0.3196 - val_accuracy: 0.8902
Epoch 21/25
55000/55000 [==============================] - 1s 24us/sample - loss: 0.1758 - accuracy: 0.9388 - val_loss: 0.3184 - val_accuracy: 0.8940
Epoch 22/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.1699 - accuracy: 0.9422 - val_loss: 0.3221 - val_accuracy: 0.8912
Epoch 23/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.1657 - accuracy: 0.9444 - val_loss: 0.3173 - val_accuracy: 0.8944
Epoch 24/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.1630 - accuracy: 0.9457 - val_loss: 0.3162 - val_accuracy: 0.8946
Epoch 25/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.1610 - accuracy: 0.9464 - val_loss: 0.3169 - val_accuracy: 0.8942
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 7s 129us/sample - loss: 1.6597 - accuracy: 0.8128 - val_loss: 0.7630 - val_accuracy: 0.8080
Epoch 2/2
55000/55000 [==============================] - 7s 124us/sample - loss: 0.7176 - accuracy: 0.8271 - val_loss: 0.6848 - val_accuracy: 0.8360
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 145us/sample - loss: 0.5741 - accuracy: 0.8030 - val_loss: 0.3841 - val_accuracy: 0.8572
Epoch 2/2
55000/55000 [==============================] - 7s 134us/sample - loss: 0.4218 - accuracy: 0.8469 - val_loss: 0.3534 - val_accuracy: 0.8728
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
_____no_output_____
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
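# Monte Carlo Dropout: 100 stochastic forward passes with dropout kept active
# (training=True), averaged to obtain better-calibrated class probabilities.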
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
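# max_norm(1.) rescales each neuron's incoming weight vector after every
# training step so that its L2 norm never exceeds 1.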
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 147us/sample - loss: 0.4745 - accuracy: 0.8329 - val_loss: 0.3988 - val_accuracy: 0.8584
Epoch 2/2
55000/55000 [==============================] - 7s 135us/sample - loss: 0.3554 - accuracy: 0.8688 - val_loss: 0.3681 - val_accuracy: 0.8726
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup This project requires Python 3.7 or above:
###Code
import sys
assert sys.version_info >= (3, 7)
###Output
_____no_output_____
###Markdown
It also requires Scikit-Learn ≥ 1.0.1:
###Code
import sklearn
assert sklearn.__version__ >= "1.0.1"
###Output
_____no_output_____
###Markdown
And TensorFlow ≥ 2.8:
###Code
import tensorflow as tf
assert tf.__version__ >= "2.8.0"
###Output
_____no_output_____
###Markdown
As we did in previous chapters, let's define the default font sizes to make the figures prettier:
###Code
import matplotlib.pyplot as plt
plt.rc('font', size=14)
plt.rc('axes', labelsize=14, titlesize=14)
plt.rc('legend', fontsize=14)
plt.rc('xtick', labelsize=10)
plt.rc('ytick', labelsize=10)
###Output
_____no_output_____
###Markdown
And let's create the `images/deep` folder (if it doesn't already exist), and define the `save_fig()` function which is used through this notebook to save the figures in high-res for the book:
###Code
from pathlib import Path
IMAGES_PATH = Path() / "images" / "deep"
IMAGES_PATH.mkdir(parents=True, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = IMAGES_PATH / f"{fig_id}.{fig_extension}"
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
# extra code – this cell generates and saves Figure 11–1
import numpy as np
def sigmoid(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, sigmoid(z), "b-", linewidth=2,
label=r"$\sigma(z) = \dfrac{1}{1+e^{-z}}$")
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props,
fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props,
fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props,
fontsize=14, ha="center")
plt.grid(True)
plt.axis([-5, 5, -0.2, 1.2])
plt.xlabel("$z$")
plt.legend(loc="upper left", fontsize=16)
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
_____no_output_____
###Markdown
Xavier and He Initialization
###Code
dense = tf.keras.layers.Dense(50, activation="relu",
kernel_initializer="he_normal")
he_avg_init = tf.keras.initializers.VarianceScaling(scale=2., mode="fan_avg",
distribution="uniform")
dense = tf.keras.layers.Dense(50, activation="sigmoid",
kernel_initializer=he_avg_init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
# extra code – this cell generates and saves Figure 11–2
def leaky_relu(z, alpha):
return np.maximum(alpha * z, z)
z = np.linspace(-5, 5, 200)
plt.plot(z, leaky_relu(z, 0.1), "b-", linewidth=2, label=r"$LeakyReLU(z) = max(\alpha z, z)$")
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-1, 3.7], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.3), arrowprops=props,
fontsize=14, ha="center")
plt.xlabel("$z$")
plt.axis([-5, 5, -1, 3.7])
plt.gca().set_aspect("equal")
plt.legend()
save_fig("leaky_relu_plot")
plt.show()
leaky_relu = tf.keras.layers.LeakyReLU(alpha=0.2) # defaults to alpha=0.3
dense = tf.keras.layers.Dense(50, activation=leaky_relu,
kernel_initializer="he_normal")
model = tf.keras.models.Sequential([
# [...] # more layers
tf.keras.layers.Dense(50, kernel_initializer="he_normal"), # no activation
tf.keras.layers.LeakyReLU(alpha=0.2), # activation as a separate layer
# [...] # more layers
])
###Output
2021-12-16 11:22:41.636848: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
###Markdown
ELU Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer, and use He initialization:
###Code
dense = tf.keras.layers.Dense(50, activation="elu",
kernel_initializer="he_normal")
###Output
_____no_output_____
###Markdown
SELU By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too, and other constraints are respected, as explained in the book). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
# extra code – this cell generates and saves Figure 11–3
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1 / np.sqrt(2)) * np.exp(1 / 2) - 1)
scale_0_1 = (
(1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e))
* np.sqrt(2 * np.pi)
* (
2 * erfc(np.sqrt(2)) * np.e ** 2
+ np.pi * erfc(1 / np.sqrt(2)) ** 2 * np.e
- 2 * (2 + np.pi) * erfc(1 / np.sqrt(2)) * np.sqrt(np.e)
+ np.pi
+ 2
) ** (-1 / 2)
)
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
z = np.linspace(-5, 5, 200)
plt.plot(z, elu(z), "b-", linewidth=2, label=r"ELU$_\alpha(z) = \alpha (e^z - 1)$ if $z < 0$, else $z$")
plt.plot(z, selu(z), "r--", linewidth=2, label=r"SELU$(z) = 1.05 \, $ELU$_{1.67}(z)$")
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k:', linewidth=2)
plt.plot([-5, 5], [-1.758, -1.758], 'k:', linewidth=2)
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.axis([-5, 5, -2.2, 3.2])
plt.xlabel("$z$")
plt.gca().set_aspect("equal")
plt.legend()
save_fig("elu_selu_plot")
plt.show()
###Output
_____no_output_____
###Markdown
Using SELU is straightforward:
###Code
dense = tf.keras.layers.Dense(50, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
**Extra material – an example of a self-regularized network using SELU**Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
tf.random.set_seed(42)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=[28, 28]))
for layer in range(100):
model.add(tf.keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(tf.keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
fashion_mnist = tf.keras.datasets.fashion_mnist.load_data()
(X_train_full, y_train_full), (X_test, y_test) = fashion_mnist
X_train, y_train = X_train_full[:-5000], y_train_full[:-5000]
X_valid, y_valid = X_train_full[-5000:], y_train_full[-5000:]
X_train, X_valid, X_test = X_train / 255, X_valid / 255, X_test / 255
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
"Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
2021-12-16 11:22:44.499697: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
###Markdown
The network managed to learn, despite how deep it is. Now look at what happens if we try to use the ReLU activation function instead:
###Code
tf.random.set_seed(42)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=[28, 28]))
for layer in range(100):
model.add(tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"))
model.add(tf.keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 12s 6ms/step - loss: 1.6932 - accuracy: 0.3071 - val_loss: 1.2058 - val_accuracy: 0.5106
Epoch 2/5
1719/1719 [==============================] - 11s 6ms/step - loss: 1.1132 - accuracy: 0.5297 - val_loss: 0.9682 - val_accuracy: 0.5718
Epoch 3/5
1719/1719 [==============================] - 10s 6ms/step - loss: 0.9480 - accuracy: 0.6117 - val_loss: 1.0552 - val_accuracy: 0.5102
Epoch 4/5
1719/1719 [==============================] - 10s 6ms/step - loss: 0.9763 - accuracy: 0.6003 - val_loss: 0.7764 - val_accuracy: 0.7070
Epoch 5/5
1719/1719 [==============================] - 11s 6ms/step - loss: 0.7892 - accuracy: 0.6875 - val_loss: 0.7485 - val_accuracy: 0.7054
###Markdown
Not great at all, we suffered from the vanishing/exploding gradients problem. GELU, Swish and Mish
###Code
# extra code – this cell generates and saves Figure 11–4
def swish(z, beta=1):
return z * sigmoid(beta * z)
def approx_gelu(z):
return swish(z, beta=1.702)
def softplus(z):
return np.log(1 + np.exp(z))
def mish(z):
return z * np.tanh(softplus(z))
z = np.linspace(-4, 2, 200)
beta = 0.6
plt.plot(z, approx_gelu(z), "b-", linewidth=2,
label=r"GELU$(z) = z\,\Phi(z)$")
plt.plot(z, swish(z), "r--", linewidth=2,
label=r"Swish$(z) = z\,\sigma(z)$")
plt.plot(z, swish(z, beta), "r:", linewidth=2,
label=fr"Swish$_{{\beta={beta}}}(z)=z\,\sigma({beta}\,z)$")
plt.plot(z, mish(z), "g:", linewidth=3,
label=fr"Mish$(z) = z\,\tanh($softplus$(z))$")
plt.plot([-4, 2], [0, 0], 'k-')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.axis([-4, 2, -1, 2])
plt.gca().set_aspect("equal")
plt.xlabel("$z$")
plt.legend(loc="upper left")
save_fig("gelu_swish_mish_plot")
plt.show()
###Output
_____no_output_____
###Markdown
Batch Normalization
###Code
# extra code - clear the name counters and set the random seed
tf.keras.backend.clear_session()
tf.random.set_seed(42)
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=[28, 28]),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(300, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(10, activation="softmax")
])
model.summary()
[(var.name, var.trainable) for var in model.layers[1].variables]
# extra code – just show that the model works! 😊
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics="accuracy")
model.fit(X_train, y_train, epochs=2, validation_data=(X_valid, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5559 - accuracy: 0.8094 - val_loss: 0.4016 - val_accuracy: 0.8558
Epoch 2/2
1719/1719 [==============================] - 3s 1ms/step - loss: 0.4083 - accuracy: 0.8561 - val_loss: 0.3676 - val_accuracy: 0.8650
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer adds its own offset parameter per input; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
# extra code - clear the name counters and set the random seed
tf.keras.backend.clear_session()
tf.random.set_seed(42)
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=[28, 28]),
tf.keras.layers.Dense(300, kernel_initializer="he_normal", use_bias=False),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Activation("relu"),
tf.keras.layers.Dense(100, kernel_initializer="he_normal", use_bias=False),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Activation("relu"),
tf.keras.layers.Dense(10, activation="softmax")
])
# extra code – just show that the model works! 😊
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics="accuracy")
model.fit(X_train, y_train, epochs=2, validation_data=(X_valid, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 3s 1ms/step - loss: 0.6063 - accuracy: 0.7993 - val_loss: 0.4296 - val_accuracy: 0.8418
Epoch 2/2
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4275 - accuracy: 0.8500 - val_loss: 0.3752 - val_accuracy: 0.8646
###Markdown
Gradient Clipping All `tf.keras.optimizers` accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = tf.keras.optimizers.SGD(clipvalue=1.0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer)
optimizer = tf.keras.optimizers.SGD(clipnorm=1.0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer)
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the Fashion MNIST training set in two:* `X_train_A`: all images of all items except for T-shirts/tops and pullovers (classes 0 and 2).* `X_train_B`: a much smaller training set of just the first 200 images of T-shirts/tops and pullovers.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (trousers, dresses, coats, sandals, shirts, sneakers, bags, and ankle boots) are somewhat similar to classes in set B (T-shirts/tops and pullovers). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in Chapter 14).
###Code
# extra code – split Fashion MNIST into tasks A and B, then train and save
# model A to "my_model_A".
pos_class_id = class_names.index("Pullover")
neg_class_id = class_names.index("T-shirt/top")
def split_dataset(X, y):
y_for_B = (y == pos_class_id) | (y == neg_class_id)
y_A = y[~y_for_B]
y_B = (y[y_for_B] == pos_class_id).astype(np.float32)
old_class_ids = list(set(range(10)) - set([neg_class_id, pos_class_id]))
for old_class_id, new_class_id in zip(old_class_ids, range(8)):
y_A[y_A == old_class_id] = new_class_id # reorder class ids for A
return ((X[~y_for_B], y_A), (X[y_for_B], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
tf.random.set_seed(42)
model_A = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=[28, 28]),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.Dense(8, activation="softmax")
])
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A")
# extra code – train and evaluate model B, without reusing model A
tf.random.set_seed(42)
model_B = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=[28, 28]),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.Dense(1, activation="sigmoid")
])
model_B.compile(loss="binary_crossentropy",
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.evaluate(X_test_B, y_test_B)
###Output
Epoch 1/20
7/7 [==============================] - 0s 20ms/step - loss: 0.7167 - accuracy: 0.5450 - val_loss: 0.7052 - val_accuracy: 0.5272
Epoch 2/20
7/7 [==============================] - 0s 7ms/step - loss: 0.6805 - accuracy: 0.5800 - val_loss: 0.6758 - val_accuracy: 0.6004
Epoch 3/20
7/7 [==============================] - 0s 7ms/step - loss: 0.6532 - accuracy: 0.6650 - val_loss: 0.6530 - val_accuracy: 0.6746
Epoch 4/20
7/7 [==============================] - 0s 6ms/step - loss: 0.6289 - accuracy: 0.7150 - val_loss: 0.6317 - val_accuracy: 0.7517
Epoch 5/20
7/7 [==============================] - 0s 7ms/step - loss: 0.6079 - accuracy: 0.7800 - val_loss: 0.6105 - val_accuracy: 0.8091
Epoch 6/20
7/7 [==============================] - 0s 7ms/step - loss: 0.5866 - accuracy: 0.8400 - val_loss: 0.5913 - val_accuracy: 0.8447
Epoch 7/20
7/7 [==============================] - 0s 6ms/step - loss: 0.5670 - accuracy: 0.8850 - val_loss: 0.5728 - val_accuracy: 0.8833
Epoch 8/20
7/7 [==============================] - 0s 7ms/step - loss: 0.5499 - accuracy: 0.8900 - val_loss: 0.5571 - val_accuracy: 0.8971
Epoch 9/20
7/7 [==============================] - 0s 7ms/step - loss: 0.5331 - accuracy: 0.9150 - val_loss: 0.5427 - val_accuracy: 0.9050
Epoch 10/20
7/7 [==============================] - 0s 7ms/step - loss: 0.5180 - accuracy: 0.9250 - val_loss: 0.5290 - val_accuracy: 0.9080
Epoch 11/20
7/7 [==============================] - 0s 6ms/step - loss: 0.5038 - accuracy: 0.9350 - val_loss: 0.5160 - val_accuracy: 0.9189
Epoch 12/20
7/7 [==============================] - 0s 6ms/step - loss: 0.4903 - accuracy: 0.9350 - val_loss: 0.5032 - val_accuracy: 0.9228
Epoch 13/20
7/7 [==============================] - 0s 7ms/step - loss: 0.4770 - accuracy: 0.9400 - val_loss: 0.4925 - val_accuracy: 0.9228
Epoch 14/20
7/7 [==============================] - 0s 6ms/step - loss: 0.4656 - accuracy: 0.9450 - val_loss: 0.4817 - val_accuracy: 0.9258
Epoch 15/20
7/7 [==============================] - 0s 6ms/step - loss: 0.4546 - accuracy: 0.9550 - val_loss: 0.4708 - val_accuracy: 0.9298
Epoch 16/20
7/7 [==============================] - 0s 6ms/step - loss: 0.4435 - accuracy: 0.9550 - val_loss: 0.4608 - val_accuracy: 0.9318
Epoch 17/20
7/7 [==============================] - 0s 6ms/step - loss: 0.4330 - accuracy: 0.9600 - val_loss: 0.4510 - val_accuracy: 0.9337
Epoch 18/20
7/7 [==============================] - 0s 6ms/step - loss: 0.4226 - accuracy: 0.9600 - val_loss: 0.4406 - val_accuracy: 0.9367
Epoch 19/20
7/7 [==============================] - 0s 6ms/step - loss: 0.4119 - accuracy: 0.9600 - val_loss: 0.4311 - val_accuracy: 0.9377
Epoch 20/20
7/7 [==============================] - 0s 7ms/step - loss: 0.4025 - accuracy: 0.9600 - val_loss: 0.4225 - val_accuracy: 0.9367
63/63 [==============================] - 0s 728us/step - loss: 0.4317 - accuracy: 0.9185
###Markdown
Model B reaches 91.85% accuracy on the test set. Now let's try reusing the pretrained model A.
###Code
model_A = tf.keras.models.load_model("my_model_A")
model_B_on_A = tf.keras.Sequential(model_A.layers[:-1])
model_B_on_A.add(tf.keras.layers.Dense(1, activation="sigmoid"))
###Output
_____no_output_____
###Markdown
Note that `model_B_on_A` and `model_A` actually share layers now, so when we train one, it will update both models. If we want to avoid that, we need to build `model_B_on_A` on top of a *clone* of `model_A`:
###Code
tf.random.set_seed(42) # extra code – ensure reproducibility
model_A_clone = tf.keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
# extra code – creating model_B_on_A just like in the previous cell
model_B_on_A = tf.keras.Sequential(model_A_clone.layers[:-1])
model_B_on_A.add(tf.keras.layers.Dense(1, activation="sigmoid"))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
optimizer = tf.keras.optimizers.SGD(learning_rate=0.001)
model_B_on_A.compile(loss="binary_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
optimizer = tf.keras.optimizers.SGD(learning_rate=0.001)
model_B_on_A.compile(loss="binary_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 0s 23ms/step - loss: 1.7893 - accuracy: 0.5550 - val_loss: 1.3324 - val_accuracy: 0.5084
Epoch 2/4
7/7 [==============================] - 0s 7ms/step - loss: 1.1235 - accuracy: 0.5350 - val_loss: 0.9199 - val_accuracy: 0.4807
Epoch 3/4
7/7 [==============================] - 0s 7ms/step - loss: 0.8836 - accuracy: 0.5000 - val_loss: 0.8266 - val_accuracy: 0.4837
Epoch 4/4
7/7 [==============================] - 0s 7ms/step - loss: 0.8202 - accuracy: 0.5250 - val_loss: 0.7795 - val_accuracy: 0.4985
Epoch 1/16
7/7 [==============================] - 0s 21ms/step - loss: 0.7348 - accuracy: 0.6050 - val_loss: 0.6372 - val_accuracy: 0.6914
Epoch 2/16
7/7 [==============================] - 0s 7ms/step - loss: 0.6055 - accuracy: 0.7600 - val_loss: 0.5283 - val_accuracy: 0.8229
Epoch 3/16
7/7 [==============================] - 0s 7ms/step - loss: 0.4992 - accuracy: 0.8400 - val_loss: 0.4742 - val_accuracy: 0.8180
Epoch 4/16
7/7 [==============================] - 0s 6ms/step - loss: 0.4297 - accuracy: 0.8700 - val_loss: 0.4212 - val_accuracy: 0.8773
Epoch 5/16
7/7 [==============================] - 0s 7ms/step - loss: 0.3825 - accuracy: 0.9050 - val_loss: 0.3797 - val_accuracy: 0.9031
Epoch 6/16
7/7 [==============================] - 0s 6ms/step - loss: 0.3438 - accuracy: 0.9250 - val_loss: 0.3534 - val_accuracy: 0.9149
Epoch 7/16
7/7 [==============================] - 0s 7ms/step - loss: 0.3148 - accuracy: 0.9500 - val_loss: 0.3384 - val_accuracy: 0.9001
Epoch 8/16
7/7 [==============================] - 0s 7ms/step - loss: 0.3012 - accuracy: 0.9450 - val_loss: 0.3179 - val_accuracy: 0.9209
Epoch 9/16
7/7 [==============================] - 0s 6ms/step - loss: 0.2767 - accuracy: 0.9650 - val_loss: 0.3043 - val_accuracy: 0.9298
Epoch 10/16
7/7 [==============================] - 0s 6ms/step - loss: 0.2623 - accuracy: 0.9550 - val_loss: 0.2929 - val_accuracy: 0.9308
Epoch 11/16
7/7 [==============================] - 0s 6ms/step - loss: 0.2512 - accuracy: 0.9600 - val_loss: 0.2830 - val_accuracy: 0.9327
Epoch 12/16
7/7 [==============================] - 0s 6ms/step - loss: 0.2397 - accuracy: 0.9600 - val_loss: 0.2744 - val_accuracy: 0.9318
Epoch 13/16
7/7 [==============================] - 0s 6ms/step - loss: 0.2295 - accuracy: 0.9600 - val_loss: 0.2675 - val_accuracy: 0.9327
Epoch 14/16
7/7 [==============================] - 0s 6ms/step - loss: 0.2225 - accuracy: 0.9600 - val_loss: 0.2598 - val_accuracy: 0.9347
Epoch 15/16
7/7 [==============================] - 0s 6ms/step - loss: 0.2147 - accuracy: 0.9600 - val_loss: 0.2542 - val_accuracy: 0.9357
Epoch 16/16
7/7 [==============================] - 0s 7ms/step - loss: 0.2077 - accuracy: 0.9600 - val_loss: 0.2492 - val_accuracy: 0.9377
###Markdown
So, what's the final verdict?
###Code
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 667us/step - loss: 0.2546 - accuracy: 0.9385
###Markdown
Great! We got a bit of transfer: the model's accuracy went up 2 percentage points, from 91.85% to 93.85%. This means the error rate dropped by almost 25%:
###Code
1 - (100 - 93.85) / (100 - 91.85)
###Output
_____no_output_____
###Markdown
Faster Optimizers
###Code
# extra code – a little function to test an optimizer on Fashion MNIST
def build_model(seed=42):
tf.random.set_seed(seed)
return tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=[28, 28]),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.Dense(10, activation="softmax")
])
def build_and_train_model(optimizer):
model = build_model()
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
return model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
optimizer = tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.9)
history_sgd = build_and_train_model(optimizer) # extra code
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.6877 - accuracy: 0.7677 - val_loss: 0.4960 - val_accuracy: 0.8172
Epoch 2/10
1719/1719 [==============================] - 2s 948us/step - loss: 0.4619 - accuracy: 0.8378 - val_loss: 0.4421 - val_accuracy: 0.8404
Epoch 3/10
1719/1719 [==============================] - 1s 868us/step - loss: 0.4179 - accuracy: 0.8525 - val_loss: 0.4188 - val_accuracy: 0.8538
Epoch 4/10
1719/1719 [==============================] - 1s 866us/step - loss: 0.3902 - accuracy: 0.8621 - val_loss: 0.3814 - val_accuracy: 0.8604
Epoch 5/10
1719/1719 [==============================] - 1s 869us/step - loss: 0.3686 - accuracy: 0.8691 - val_loss: 0.3665 - val_accuracy: 0.8656
Epoch 6/10
1719/1719 [==============================] - 2s 925us/step - loss: 0.3553 - accuracy: 0.8732 - val_loss: 0.3643 - val_accuracy: 0.8720
Epoch 7/10
1719/1719 [==============================] - 2s 908us/step - loss: 0.3385 - accuracy: 0.8778 - val_loss: 0.3611 - val_accuracy: 0.8684
Epoch 8/10
1719/1719 [==============================] - 2s 926us/step - loss: 0.3297 - accuracy: 0.8796 - val_loss: 0.3490 - val_accuracy: 0.8726
Epoch 9/10
1719/1719 [==============================] - 2s 893us/step - loss: 0.3200 - accuracy: 0.8850 - val_loss: 0.3625 - val_accuracy: 0.8666
Epoch 10/10
1719/1719 [==============================] - 2s 886us/step - loss: 0.3097 - accuracy: 0.8881 - val_loss: 0.3656 - val_accuracy: 0.8672
###Markdown
Momentum optimization
###Code
optimizer = tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.9)
history_momentum = build_and_train_model(optimizer) # extra code
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 941us/step - loss: 0.6877 - accuracy: 0.7677 - val_loss: 0.4960 - val_accuracy: 0.8172
Epoch 2/10
1719/1719 [==============================] - 2s 878us/step - loss: 0.4619 - accuracy: 0.8378 - val_loss: 0.4421 - val_accuracy: 0.8404
Epoch 3/10
1719/1719 [==============================] - 2s 898us/step - loss: 0.4179 - accuracy: 0.8525 - val_loss: 0.4188 - val_accuracy: 0.8538
Epoch 4/10
1719/1719 [==============================] - 2s 934us/step - loss: 0.3902 - accuracy: 0.8621 - val_loss: 0.3814 - val_accuracy: 0.8604
Epoch 5/10
1719/1719 [==============================] - 2s 910us/step - loss: 0.3686 - accuracy: 0.8691 - val_loss: 0.3665 - val_accuracy: 0.8656
Epoch 6/10
1719/1719 [==============================] - 2s 913us/step - loss: 0.3553 - accuracy: 0.8732 - val_loss: 0.3643 - val_accuracy: 0.8720
Epoch 7/10
1719/1719 [==============================] - 2s 893us/step - loss: 0.3385 - accuracy: 0.8778 - val_loss: 0.3611 - val_accuracy: 0.8684
Epoch 8/10
1719/1719 [==============================] - 2s 968us/step - loss: 0.3297 - accuracy: 0.8796 - val_loss: 0.3490 - val_accuracy: 0.8726
Epoch 9/10
1719/1719 [==============================] - 2s 913us/step - loss: 0.3200 - accuracy: 0.8850 - val_loss: 0.3625 - val_accuracy: 0.8666
Epoch 10/10
1719/1719 [==============================] - 1s 858us/step - loss: 0.3097 - accuracy: 0.8881 - val_loss: 0.3656 - val_accuracy: 0.8672
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.9,
nesterov=True)
history_nesterov = build_and_train_model(optimizer) # extra code
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 907us/step - loss: 0.6777 - accuracy: 0.7711 - val_loss: 0.4796 - val_accuracy: 0.8260
Epoch 2/10
1719/1719 [==============================] - 2s 898us/step - loss: 0.4570 - accuracy: 0.8398 - val_loss: 0.4358 - val_accuracy: 0.8396
Epoch 3/10
1719/1719 [==============================] - 1s 872us/step - loss: 0.4140 - accuracy: 0.8537 - val_loss: 0.4013 - val_accuracy: 0.8566
Epoch 4/10
1719/1719 [==============================] - 2s 902us/step - loss: 0.3882 - accuracy: 0.8629 - val_loss: 0.3802 - val_accuracy: 0.8616
Epoch 5/10
1719/1719 [==============================] - 2s 913us/step - loss: 0.3666 - accuracy: 0.8703 - val_loss: 0.3689 - val_accuracy: 0.8638
Epoch 6/10
1719/1719 [==============================] - 2s 882us/step - loss: 0.3531 - accuracy: 0.8732 - val_loss: 0.3681 - val_accuracy: 0.8688
Epoch 7/10
1719/1719 [==============================] - 2s 958us/step - loss: 0.3375 - accuracy: 0.8784 - val_loss: 0.3658 - val_accuracy: 0.8670
Epoch 8/10
1719/1719 [==============================] - 2s 895us/step - loss: 0.3278 - accuracy: 0.8815 - val_loss: 0.3598 - val_accuracy: 0.8682
Epoch 9/10
1719/1719 [==============================] - 2s 878us/step - loss: 0.3183 - accuracy: 0.8855 - val_loss: 0.3472 - val_accuracy: 0.8720
Epoch 10/10
1719/1719 [==============================] - 2s 921us/step - loss: 0.3081 - accuracy: 0.8891 - val_loss: 0.3624 - val_accuracy: 0.8708
###Markdown
AdaGrad
###Code
optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.001)
history_adagrad = build_and_train_model(optimizer) # extra code
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.0003 - accuracy: 0.6822 - val_loss: 0.6876 - val_accuracy: 0.7744
Epoch 2/10
1719/1719 [==============================] - 2s 912us/step - loss: 0.6389 - accuracy: 0.7904 - val_loss: 0.5837 - val_accuracy: 0.8048
Epoch 3/10
1719/1719 [==============================] - 2s 930us/step - loss: 0.5682 - accuracy: 0.8105 - val_loss: 0.5379 - val_accuracy: 0.8154
Epoch 4/10
1719/1719 [==============================] - 2s 878us/step - loss: 0.5316 - accuracy: 0.8215 - val_loss: 0.5135 - val_accuracy: 0.8244
Epoch 5/10
1719/1719 [==============================] - 1s 855us/step - loss: 0.5076 - accuracy: 0.8295 - val_loss: 0.4937 - val_accuracy: 0.8288
Epoch 6/10
1719/1719 [==============================] - 1s 868us/step - loss: 0.4905 - accuracy: 0.8338 - val_loss: 0.4821 - val_accuracy: 0.8312
Epoch 7/10
1719/1719 [==============================] - 2s 940us/step - loss: 0.4776 - accuracy: 0.8371 - val_loss: 0.4705 - val_accuracy: 0.8348
Epoch 8/10
1719/1719 [==============================] - 2s 966us/step - loss: 0.4674 - accuracy: 0.8409 - val_loss: 0.4611 - val_accuracy: 0.8362
Epoch 9/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.4587 - accuracy: 0.8435 - val_loss: 0.4548 - val_accuracy: 0.8406
Epoch 10/10
1719/1719 [==============================] - 2s 873us/step - loss: 0.4511 - accuracy: 0.8458 - val_loss: 0.4469 - val_accuracy: 0.8424
###Markdown
RMSProp
###Code
optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9)
history_rmsprop = build_and_train_model(optimizer) # extra code
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5138 - accuracy: 0.8135 - val_loss: 0.4413 - val_accuracy: 0.8338
Epoch 2/10
1719/1719 [==============================] - 2s 942us/step - loss: 0.3932 - accuracy: 0.8590 - val_loss: 0.4518 - val_accuracy: 0.8370
Epoch 3/10
1719/1719 [==============================] - 2s 948us/step - loss: 0.3711 - accuracy: 0.8692 - val_loss: 0.3914 - val_accuracy: 0.8686
Epoch 4/10
1719/1719 [==============================] - 2s 949us/step - loss: 0.3643 - accuracy: 0.8735 - val_loss: 0.4176 - val_accuracy: 0.8644
Epoch 5/10
1719/1719 [==============================] - 2s 970us/step - loss: 0.3578 - accuracy: 0.8769 - val_loss: 0.3874 - val_accuracy: 0.8696
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3561 - accuracy: 0.8775 - val_loss: 0.4650 - val_accuracy: 0.8590
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3528 - accuracy: 0.8783 - val_loss: 0.4122 - val_accuracy: 0.8774
Epoch 8/10
1719/1719 [==============================] - 2s 989us/step - loss: 0.3491 - accuracy: 0.8811 - val_loss: 0.5151 - val_accuracy: 0.8586
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3479 - accuracy: 0.8829 - val_loss: 0.4457 - val_accuracy: 0.8856
Epoch 10/10
1719/1719 [==============================] - 2s 1000us/step - loss: 0.3437 - accuracy: 0.8830 - val_loss: 0.4781 - val_accuracy: 0.8636
###Markdown
Adam Optimization
###Code
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9,
beta_2=0.999)
history_adam = build_and_train_model(optimizer) # extra code
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4949 - accuracy: 0.8220 - val_loss: 0.4110 - val_accuracy: 0.8428
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3727 - accuracy: 0.8637 - val_loss: 0.4153 - val_accuracy: 0.8370
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3372 - accuracy: 0.8756 - val_loss: 0.3600 - val_accuracy: 0.8708
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3126 - accuracy: 0.8833 - val_loss: 0.3498 - val_accuracy: 0.8760
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2965 - accuracy: 0.8901 - val_loss: 0.3264 - val_accuracy: 0.8794
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2821 - accuracy: 0.8947 - val_loss: 0.3295 - val_accuracy: 0.8782
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2672 - accuracy: 0.8993 - val_loss: 0.3473 - val_accuracy: 0.8790
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2587 - accuracy: 0.9020 - val_loss: 0.3230 - val_accuracy: 0.8818
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2500 - accuracy: 0.9057 - val_loss: 0.3676 - val_accuracy: 0.8744
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2428 - accuracy: 0.9073 - val_loss: 0.3879 - val_accuracy: 0.8696
###Markdown
**Adamax Optimization**
###Code
optimizer = tf.keras.optimizers.Adamax(learning_rate=0.001, beta_1=0.9,
beta_2=0.999)
history_adamax = build_and_train_model(optimizer) # extra code
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5327 - accuracy: 0.8151 - val_loss: 0.4402 - val_accuracy: 0.8340
Epoch 2/10
1719/1719 [==============================] - 2s 935us/step - loss: 0.3950 - accuracy: 0.8591 - val_loss: 0.3907 - val_accuracy: 0.8512
Epoch 3/10
1719/1719 [==============================] - 2s 933us/step - loss: 0.3563 - accuracy: 0.8715 - val_loss: 0.3730 - val_accuracy: 0.8676
Epoch 4/10
1719/1719 [==============================] - 2s 942us/step - loss: 0.3335 - accuracy: 0.8797 - val_loss: 0.3453 - val_accuracy: 0.8738
Epoch 5/10
1719/1719 [==============================] - 2s 993us/step - loss: 0.3129 - accuracy: 0.8853 - val_loss: 0.3270 - val_accuracy: 0.8792
Epoch 6/10
1719/1719 [==============================] - 2s 926us/step - loss: 0.2986 - accuracy: 0.8913 - val_loss: 0.3396 - val_accuracy: 0.8772
Epoch 7/10
1719/1719 [==============================] - 2s 939us/step - loss: 0.2854 - accuracy: 0.8949 - val_loss: 0.3390 - val_accuracy: 0.8770
Epoch 8/10
1719/1719 [==============================] - 2s 949us/step - loss: 0.2757 - accuracy: 0.8984 - val_loss: 0.3147 - val_accuracy: 0.8854
Epoch 9/10
1719/1719 [==============================] - 2s 952us/step - loss: 0.2662 - accuracy: 0.9020 - val_loss: 0.3341 - val_accuracy: 0.8760
Epoch 10/10
1719/1719 [==============================] - 2s 957us/step - loss: 0.2542 - accuracy: 0.9063 - val_loss: 0.3282 - val_accuracy: 0.8780
###Markdown
**Nadam Optimization**
###Code
optimizer = tf.keras.optimizers.Nadam(learning_rate=0.001, beta_1=0.9,
beta_2=0.999)
history_nadam = build_and_train_model(optimizer) # extra code
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 0.4826 - accuracy: 0.8284 - val_loss: 0.4092 - val_accuracy: 0.8456
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3610 - accuracy: 0.8667 - val_loss: 0.3893 - val_accuracy: 0.8592
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3270 - accuracy: 0.8784 - val_loss: 0.3653 - val_accuracy: 0.8712
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3049 - accuracy: 0.8874 - val_loss: 0.3444 - val_accuracy: 0.8726
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2897 - accuracy: 0.8905 - val_loss: 0.3174 - val_accuracy: 0.8810
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2753 - accuracy: 0.8981 - val_loss: 0.3389 - val_accuracy: 0.8830
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2652 - accuracy: 0.9000 - val_loss: 0.3725 - val_accuracy: 0.8734
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2563 - accuracy: 0.9034 - val_loss: 0.3229 - val_accuracy: 0.8828
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2463 - accuracy: 0.9079 - val_loss: 0.3353 - val_accuracy: 0.8818
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2402 - accuracy: 0.9091 - val_loss: 0.3813 - val_accuracy: 0.8740
###Markdown
**AdamW Optimization** On Colab or Kaggle, we need to install the TensorFlow-Addons library:
###Code
if "google.colab" in sys.modules or "kaggle_secrets" in sys.modules:
%pip install -q -U tensorflow-addons
import tensorflow_addons as tfa
optimizer = tfa.optimizers.AdamW(weight_decay=1e-5, learning_rate=0.001,
beta_1=0.9, beta_2=0.999)
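# Note (assumption, not in the original cell): recent TF/Keras versions
# (roughly 2.11+) ship AdamW directly, so the tensorflow-addons dependency
# could be replaced with something like:
#   optimizer = tf.keras.optimizers.AdamW(weight_decay=1e-5,
#                                         learning_rate=0.001,
#                                         beta_1=0.9, beta_2=0.999)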
history_adamw = build_and_train_model(optimizer) # extra code
# extra code – visualize the learning curves of all the optimizers
for loss in ("loss", "val_loss"):
plt.figure(figsize=(12, 8))
opt_names = "SGD Momentum Nesterov AdaGrad RMSProp Adam Adamax Nadam AdamW"
for history, opt_name in zip((history_sgd, history_momentum, history_nesterov,
history_adagrad, history_rmsprop, history_adam,
history_adamax, history_nadam, history_adamw),
opt_names.split()):
plt.plot(history.history[loss], label=f"{opt_name}", linewidth=3)
plt.grid()
plt.xlabel("Epochs")
plt.ylabel({"loss": "Training loss", "val_loss": "Validation loss"}[loss])
plt.legend(loc="upper left")
plt.axis([0, 9, 0.1, 0.7])
plt.show()
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, decay=1e-4)
history_power_scheduling = build_and_train_model(optimizer) # extra code
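# Sketch (extra, not in the original cell): the same power schedule can be
# expressed with the schedules API instead of the legacy `decay` argument.
# InverseTimeDecay computes lr0 / (1 + decay_rate * step / decay_steps), so
# decay_steps=1 and decay_rate=1e-4 match the optimizer above. The variable
# names below are purely illustrative.
power_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
    initial_learning_rate=0.01, decay_steps=1, decay_rate=1e-4)
power_optimizer = tf.keras.optimizers.SGD(learning_rate=power_schedule)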
# extra code – this cell plots power scheduling
import math
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = math.ceil(len(X_train) / batch_size)
n_epochs = 25
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1 ** (epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1 ** (epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1 ** (epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
# extra code – build and compile a model for Fashion MNIST
tf.random.set_seed(42)
model = build_model()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.001)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
lr_scheduler = tf.keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train, y_train, epochs=n_epochs,
validation_data=(X_valid, y_valid),
callbacks=[lr_scheduler])
# extra code – this cell plots exponential scheduling
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1 ** (1 / 20)
###Output
_____no_output_____
###Markdown
**Extra material**: if you want to update the learning rate at each iteration rather than at each epoch, you can write your own callback class:
###Code
K = tf.keras.backend
class ExponentialDecay(tf.keras.callbacks.Callback):
def __init__(self, n_steps=40_000):
super().__init__()
self.n_steps = n_steps
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.learning_rate)
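        # multiply the learning rate by a constant factor chosen so that it
        # decays by a total factor of 0.1 over n_steps batches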
new_learning_rate = lr * 0.1 ** (1 / self.n_steps)
K.set_value(self.model.optimizer.learning_rate, new_learning_rate)
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.learning_rate)
lr0 = 0.01
model = build_model()
optimizer = tf.keras.optimizers.SGD(learning_rate=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 25
batch_size = 32
n_steps = n_epochs * math.ceil(len(X_train) / batch_size)
exp_decay = ExponentialDecay(n_steps)
history = model.fit(X_train, y_train, epochs=n_epochs,
validation_data=(X_valid, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * math.ceil(len(X_train) / batch_size)
steps = np.arange(n_steps)
decay_rate = 0.1
lrs = lr0 * decay_rate ** (steps / n_steps)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
# extra code – this cell demonstrates a more general way to define
# piecewise constant scheduling.
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
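        # (boundaries > epoch) is True from the first boundary above the
        # current epoch onwards; argmax gives that boundary's index, and -1
        # steps back to the value for the current segment (wrapping to the
        # last value once every boundary has been passed)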
return values[(boundaries > epoch).argmax() - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
# extra code – use a tf.keras.callbacks.LearningRateScheduler like earlier
n_epochs = 25
lr_scheduler = tf.keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = build_model()
optimizer = tf.keras.optimizers.Nadam(learning_rate=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=n_epochs,
validation_data=(X_valid, y_valid),
callbacks=[lr_scheduler])
# extra code – this cell plots piecewise constant scheduling
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
# extra code – build and compile the model
model = build_model()
optimizer = tf.keras.optimizers.SGD(learning_rate=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
lr_scheduler = tf.keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
history = model.fit(X_train, y_train, epochs=n_epochs,
validation_data=(X_valid, y_valid),
callbacks=[lr_scheduler])
# extra code – this cell plots performance scheduling
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
import math
batch_size = 32
n_epochs = 25
n_steps = n_epochs * math.ceil(len(X_train) / batch_size)
scheduled_learning_rate = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate=0.01, decay_steps=n_steps, decay_rate=0.1)
optimizer = tf.keras.optimizers.SGD(learning_rate=scheduled_learning_rate)
# extra code – build and train the model
model = build_and_train_model(optimizer)
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 864us/step - loss: 0.6808 - accuracy: 0.7683 - val_loss: 0.4806 - val_accuracy: 0.8268
Epoch 2/10
1719/1719 [==============================] - 1s 812us/step - loss: 0.4686 - accuracy: 0.8359 - val_loss: 0.4420 - val_accuracy: 0.8408
Epoch 3/10
1719/1719 [==============================] - 1s 809us/step - loss: 0.4221 - accuracy: 0.8494 - val_loss: 0.4108 - val_accuracy: 0.8530
Epoch 4/10
1719/1719 [==============================] - 1s 828us/step - loss: 0.3976 - accuracy: 0.8592 - val_loss: 0.3867 - val_accuracy: 0.8582
Epoch 5/10
1719/1719 [==============================] - 1s 825us/step - loss: 0.3775 - accuracy: 0.8655 - val_loss: 0.3784 - val_accuracy: 0.8620
Epoch 6/10
1719/1719 [==============================] - 1s 817us/step - loss: 0.3633 - accuracy: 0.8705 - val_loss: 0.3796 - val_accuracy: 0.8624
Epoch 7/10
1719/1719 [==============================] - 1s 843us/step - loss: 0.3518 - accuracy: 0.8737 - val_loss: 0.3662 - val_accuracy: 0.8662
Epoch 8/10
1719/1719 [==============================] - 1s 805us/step - loss: 0.3422 - accuracy: 0.8779 - val_loss: 0.3707 - val_accuracy: 0.8628
Epoch 9/10
1719/1719 [==============================] - 1s 821us/step - loss: 0.3339 - accuracy: 0.8809 - val_loss: 0.3475 - val_accuracy: 0.8696
Epoch 10/10
1719/1719 [==============================] - 1s 829us/step - loss: 0.3266 - accuracy: 0.8826 - val_loss: 0.3473 - val_accuracy: 0.8710
###Markdown
For piecewise constant scheduling, try this:
###Code
# extra code – shows how to use PiecewiseConstantDecay
scheduled_learning_rate = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
###Markdown
1Cycle scheduling The `ExponentialLearningRate` custom callback updates the learning rate during training, at the end of each batch: it multiplies it by a constant `factor`. It also records the learning rate and loss at each batch. Since `logs["loss"]` is actually the mean loss since the start of the epoch, and we want the individual batch loss instead, we multiply that mean by the number of batches seen so far in the epoch to get the running total loss, then subtract the previous running total to recover the current batch's loss.
###Code
K = tf.keras.backend
class ExponentialLearningRate(tf.keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_epoch_begin(self, epoch, logs=None):
self.sum_of_epoch_losses = 0
def on_batch_end(self, batch, logs=None):
mean_epoch_loss = logs["loss"] # the epoch's mean loss so far
new_sum_of_epoch_losses = mean_epoch_loss * (batch + 1)
batch_loss = new_sum_of_epoch_losses - self.sum_of_epoch_losses
self.sum_of_epoch_losses = new_sum_of_epoch_losses
self.rates.append(K.get_value(self.model.optimizer.learning_rate))
self.losses.append(batch_loss)
K.set_value(self.model.optimizer.learning_rate,
self.model.optimizer.learning_rate * self.factor)
###Output
_____no_output_____
###Markdown
The `find_learning_rate()` function trains the model using the `ExponentialLearningRate` callback, and it returns the learning rates and corresponding batch losses. At the end, it restores the model and its optimizer to their initial state.
###Code
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=1e-4,
max_rate=1):
init_weights = model.get_weights()
iterations = math.ceil(len(X) / batch_size) * epochs
factor = (max_rate / min_rate) ** (1 / iterations)
init_lr = K.get_value(model.optimizer.learning_rate)
K.set_value(model.optimizer.learning_rate, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.learning_rate, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
###Output
_____no_output_____
###Markdown
The `plot_lr_vs_loss()` function plots the learning rates vs the losses. The optimal learning rate to use as the maximum learning rate in 1cycle is near the bottom of the curve.
###Code
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses, "b")
plt.gca().set_xscale('log')
max_loss = losses[0] + min(losses)
plt.hlines(min(losses), min(rates), max(rates), color="k")
plt.axis([min(rates), max(rates), 0, max_loss])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
plt.grid()
###Output
_____no_output_____
###Markdown
Let's build a simple Fashion MNIST model and compile it:
###Code
model = build_model()
model.compile(loss="sparse_categorical_crossentropy",
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's find the optimal max learning rate for 1cycle:
###Code
batch_size = 128
rates, losses = find_learning_rate(model, X_train, y_train, epochs=1,
batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
###Output
430/430 [==============================] - 1s 1ms/step - loss: 1.7725 - accuracy: 0.4122
###Markdown
Looks like the max learning rate to use for 1cycle is around 10⁻¹. The `OneCycleScheduler` custom callback updates the learning rate at the beginning of each batch. It applies the logic described in the book: increase the learning rate linearly during about half of training, then reduce it linearly back to the initial learning rate, and lastly reduce it down to close to zero linearly for the very last part of training.
###Code
class OneCycleScheduler(tf.keras.callbacks.Callback):
def __init__(self, iterations, max_lr=1e-3, start_lr=None,
last_iterations=None, last_lr=None):
self.iterations = iterations
self.max_lr = max_lr
self.start_lr = start_lr or max_lr / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_lr = last_lr or self.start_lr / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, lr1, lr2):
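        # linear interpolation of the learning rate between the points
        # (iter1, lr1) and (iter2, lr2), evaluated at the current iteration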
return (lr2 - lr1) * (self.iteration - iter1) / (iter2 - iter1) + lr1
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
lr = self._interpolate(0, self.half_iteration, self.start_lr,
self.max_lr)
elif self.iteration < 2 * self.half_iteration:
lr = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_lr, self.start_lr)
else:
lr = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_lr, self.last_lr)
self.iteration += 1
K.set_value(self.model.optimizer.learning_rate, lr)
###Output
_____no_output_____
###Markdown
Let's build and compile a simple Fashion MNIST model, then train it using the `OneCycleScheduler` callback:
###Code
model = build_model()
model.compile(loss="sparse_categorical_crossentropy",
optimizer=tf.keras.optimizers.SGD(),
metrics=["accuracy"])
n_epochs = 25
onecycle = OneCycleScheduler(math.ceil(len(X_train) / batch_size) * n_epochs,
max_lr=0.1)
history = model.fit(X_train, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 1s 2ms/step - loss: 0.9502 - accuracy: 0.6913 - val_loss: 0.6003 - val_accuracy: 0.7874
Epoch 2/25
430/430 [==============================] - 1s 1ms/step - loss: 0.5695 - accuracy: 0.8025 - val_loss: 0.4918 - val_accuracy: 0.8248
Epoch 3/25
430/430 [==============================] - 1s 1ms/step - loss: 0.4954 - accuracy: 0.8252 - val_loss: 0.4762 - val_accuracy: 0.8264
Epoch 4/25
430/430 [==============================] - 1s 1ms/step - loss: 0.4515 - accuracy: 0.8402 - val_loss: 0.4261 - val_accuracy: 0.8478
Epoch 5/25
430/430 [==============================] - 1s 1ms/step - loss: 0.4225 - accuracy: 0.8492 - val_loss: 0.4066 - val_accuracy: 0.8486
Epoch 6/25
430/430 [==============================] - 1s 1ms/step - loss: 0.3958 - accuracy: 0.8571 - val_loss: 0.4787 - val_accuracy: 0.8224
Epoch 7/25
430/430 [==============================] - 1s 1ms/step - loss: 0.3787 - accuracy: 0.8626 - val_loss: 0.3917 - val_accuracy: 0.8566
Epoch 8/25
430/430 [==============================] - 1s 1ms/step - loss: 0.3630 - accuracy: 0.8683 - val_loss: 0.4719 - val_accuracy: 0.8296
Epoch 9/25
430/430 [==============================] - 1s 1ms/step - loss: 0.3512 - accuracy: 0.8724 - val_loss: 0.3673 - val_accuracy: 0.8652
Epoch 10/25
430/430 [==============================] - 1s 1ms/step - loss: 0.3360 - accuracy: 0.8766 - val_loss: 0.4957 - val_accuracy: 0.8466
Epoch 11/25
430/430 [==============================] - 1s 1ms/step - loss: 0.3287 - accuracy: 0.8786 - val_loss: 0.4187 - val_accuracy: 0.8370
Epoch 12/25
430/430 [==============================] - 1s 1ms/step - loss: 0.3173 - accuracy: 0.8815 - val_loss: 0.3425 - val_accuracy: 0.8728
Epoch 13/25
430/430 [==============================] - 1s 1ms/step - loss: 0.2961 - accuracy: 0.8910 - val_loss: 0.3217 - val_accuracy: 0.8792
Epoch 14/25
430/430 [==============================] - 1s 1ms/step - loss: 0.2818 - accuracy: 0.8958 - val_loss: 0.3734 - val_accuracy: 0.8692
Epoch 15/25
430/430 [==============================] - 1s 1ms/step - loss: 0.2675 - accuracy: 0.9003 - val_loss: 0.3261 - val_accuracy: 0.8844
Epoch 16/25
430/430 [==============================] - 1s 1ms/step - loss: 0.2558 - accuracy: 0.9055 - val_loss: 0.3205 - val_accuracy: 0.8820
Epoch 17/25
430/430 [==============================] - 1s 1ms/step - loss: 0.2464 - accuracy: 0.9091 - val_loss: 0.3089 - val_accuracy: 0.8894
Epoch 18/25
430/430 [==============================] - 1s 1ms/step - loss: 0.2368 - accuracy: 0.9115 - val_loss: 0.3130 - val_accuracy: 0.8870
Epoch 19/25
430/430 [==============================] - 1s 1ms/step - loss: 0.2292 - accuracy: 0.9145 - val_loss: 0.3078 - val_accuracy: 0.8854
Epoch 20/25
430/430 [==============================] - 1s 1ms/step - loss: 0.2205 - accuracy: 0.9186 - val_loss: 0.3092 - val_accuracy: 0.8886
Epoch 21/25
430/430 [==============================] - 1s 1ms/step - loss: 0.2138 - accuracy: 0.9209 - val_loss: 0.3022 - val_accuracy: 0.8914
Epoch 22/25
430/430 [==============================] - 1s 1ms/step - loss: 0.2073 - accuracy: 0.9232 - val_loss: 0.3054 - val_accuracy: 0.8914
Epoch 23/25
430/430 [==============================] - 1s 1ms/step - loss: 0.2020 - accuracy: 0.9261 - val_loss: 0.3026 - val_accuracy: 0.8896
Epoch 24/25
430/430 [==============================] - 1s 1ms/step - loss: 0.1989 - accuracy: 0.9273 - val_loss: 0.3020 - val_accuracy: 0.8922
Epoch 25/25
430/430 [==============================] - 1s 1ms/step - loss: 0.1967 - accuracy: 0.9276 - val_loss: 0.3016 - val_accuracy: 0.8920
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal",
kernel_regularizer=tf.keras.regularizers.l2(0.01))
###Output
_____no_output_____
###Markdown
Or use `l1(0.1)` for ℓ1 regularization with a factor of 0.1, or `l1_l2(0.1, 0.01)` for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively.
###Code
tf.random.set_seed(42) # extra code – for reproducibility
from functools import partial
RegularizedDense = partial(tf.keras.layers.Dense,
activation="relu",
kernel_initializer="he_normal",
kernel_regularizer=tf.keras.regularizers.l2(0.01))
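# Extra sketch (not in the original cell): the l1 and l1_l2 regularizers
# mentioned above, applied to single Dense layers. The layer sizes and
# regularization factors here are purely illustrative.
l1_dense = tf.keras.layers.Dense(
    100, activation="relu", kernel_initializer="he_normal",
    kernel_regularizer=tf.keras.regularizers.l1(0.1))
l1_l2_dense = tf.keras.layers.Dense(
    100, activation="relu", kernel_initializer="he_normal",
    kernel_regularizer=tf.keras.regularizers.l1_l2(l1=0.1, l2=0.01))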
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(100),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
# extra code – compile and train the model
optimizer = tf.keras.optimizers.SGD(learning_rate=0.02)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=2,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 2s 878us/step - loss: 3.1224 - accuracy: 0.7748 - val_loss: 1.8602 - val_accuracy: 0.8264
Epoch 2/2
1719/1719 [==============================] - 1s 814us/step - loss: 1.4263 - accuracy: 0.8159 - val_loss: 1.1269 - val_accuracy: 0.8182
###Markdown
Dropout
###Code
tf.random.set_seed(42) # extra code – for reproducibility
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=[28, 28]),
tf.keras.layers.Dropout(rate=0.2),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.Dropout(rate=0.2),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.Dropout(rate=0.2),
tf.keras.layers.Dense(10, activation="softmax")
])
# extra code – compile and train the model
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.6703 - accuracy: 0.7536 - val_loss: 0.4498 - val_accuracy: 0.8342
Epoch 2/10
1719/1719 [==============================] - 2s 996us/step - loss: 0.5103 - accuracy: 0.8136 - val_loss: 0.4401 - val_accuracy: 0.8296
Epoch 3/10
1719/1719 [==============================] - 2s 998us/step - loss: 0.4712 - accuracy: 0.8263 - val_loss: 0.3806 - val_accuracy: 0.8554
Epoch 4/10
1719/1719 [==============================] - 2s 977us/step - loss: 0.4488 - accuracy: 0.8337 - val_loss: 0.3711 - val_accuracy: 0.8608
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4342 - accuracy: 0.8409 - val_loss: 0.3672 - val_accuracy: 0.8606
Epoch 6/10
1719/1719 [==============================] - 2s 983us/step - loss: 0.4245 - accuracy: 0.8427 - val_loss: 0.3706 - val_accuracy: 0.8600
Epoch 7/10
1719/1719 [==============================] - 2s 995us/step - loss: 0.4131 - accuracy: 0.8467 - val_loss: 0.3582 - val_accuracy: 0.8650
Epoch 8/10
1719/1719 [==============================] - 2s 959us/step - loss: 0.4074 - accuracy: 0.8484 - val_loss: 0.3478 - val_accuracy: 0.8708
Epoch 9/10
1719/1719 [==============================] - 2s 997us/step - loss: 0.4024 - accuracy: 0.8533 - val_loss: 0.3556 - val_accuracy: 0.8690
Epoch 10/10
1719/1719 [==============================] - 2s 998us/step - loss: 0.3903 - accuracy: 0.8552 - val_loss: 0.3453 - val_accuracy: 0.8732
###Markdown
The training accuracy looks like it's lower than the validation accuracy, but that's just because dropout is only active during training. If we evaluate the model on the training set after training (i.e., with dropout turned off), we get the "real" training accuracy, which is very slightly higher than the validation accuracy and the test accuracy:
###Code
model.evaluate(X_train, y_train)
model.evaluate(X_test, y_test)
###Output
313/313 [==============================] - 0s 588us/step - loss: 0.3629 - accuracy: 0.8700
###Markdown
**Note**: make sure to use `AlphaDropout` instead of `Dropout` if you want to build a self-normalizing neural net using SELU. MC Dropout
###Code
tf.random.set_seed(42) # extra code – for reproducibility
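# Calling the model with training=True keeps the Dropout layers active, so
# each of the 100 forward passes below gives a different prediction;
# averaging them over axis 0 yields the MC Dropout estimate.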
y_probas = np.stack([model(X_test, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
model.predict(X_test[:1]).round(3)
y_proba[0].round(3)
y_std = y_probas.std(axis=0)
y_std[0].round(3)
y_pred = y_proba.argmax(axis=1)
accuracy = (y_pred == y_test).sum() / len(y_test)
accuracy
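# Subclassing Dropout so that it stays active even when the model is called
# with training=False (e.g., during predict() or evaluate()), which is what
# MC Dropout requires: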
class MCDropout(tf.keras.layers.Dropout):
def call(self, inputs, training=None):
return super().call(inputs, training=True)
# extra code – shows how to convert Dropout to MCDropout in a Sequential model
Dropout = tf.keras.layers.Dropout
mc_model = tf.keras.Sequential([
MCDropout(layer.rate) if isinstance(layer, Dropout) else layer
for layer in model.layers
])
mc_model.set_weights(model.get_weights())
mc_model.summary()
###Output
Model: "sequential_25"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten_22 (Flatten) (None, 784) 0
_________________________________________________________________
mc_dropout (MCDropout) (None, 784) 0
_________________________________________________________________
dense_89 (Dense) (None, 100) 78500
_________________________________________________________________
mc_dropout_1 (MCDropout) (None, 100) 0
_________________________________________________________________
dense_90 (Dense) (None, 100) 10100
_________________________________________________________________
mc_dropout_2 (MCDropout) (None, 100) 0
_________________________________________________________________
dense_91 (Dense) (None, 10) 1010
=================================================================
Total params: 89,610
Trainable params: 89,610
Non-trainable params: 0
_________________________________________________________________
###Markdown
Now we can use the model with MC Dropout:
###Code
# extra code – shows that the model works without retraining
tf.random.set_seed(42)
np.mean([mc_model.predict(X_test[:1])
for sample in range(100)], axis=0).round(2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
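# max_norm(1.) rescales each neuron's incoming weight vector after each
# training step whenever its L2 norm exceeds 1: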
dense = tf.keras.layers.Dense(
100, activation="relu", kernel_initializer="he_normal",
kernel_constraint=tf.keras.constraints.max_norm(1.))
# extra code – shows how to apply max norm to every hidden layer in a model
MaxNormDense = partial(tf.keras.layers.Dense,
activation="relu", kernel_initializer="he_normal",
kernel_constraint=tf.keras.constraints.max_norm(1.))
tf.random.set_seed(42)
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(100),
MaxNormDense(100),
tf.keras.layers.Dense(10, activation="softmax")
])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5500 - accuracy: 0.8015 - val_loss: 0.4510 - val_accuracy: 0.8242
Epoch 2/10
1719/1719 [==============================] - 2s 960us/step - loss: 0.4089 - accuracy: 0.8499 - val_loss: 0.3956 - val_accuracy: 0.8504
Epoch 3/10
1719/1719 [==============================] - 2s 974us/step - loss: 0.3777 - accuracy: 0.8604 - val_loss: 0.3693 - val_accuracy: 0.8680
Epoch 4/10
1719/1719 [==============================] - 2s 943us/step - loss: 0.3581 - accuracy: 0.8690 - val_loss: 0.3517 - val_accuracy: 0.8716
Epoch 5/10
1719/1719 [==============================] - 2s 949us/step - loss: 0.3416 - accuracy: 0.8729 - val_loss: 0.3433 - val_accuracy: 0.8682
Epoch 6/10
1719/1719 [==============================] - 2s 951us/step - loss: 0.3368 - accuracy: 0.8756 - val_loss: 0.4045 - val_accuracy: 0.8582
Epoch 7/10
1719/1719 [==============================] - 2s 935us/step - loss: 0.3293 - accuracy: 0.8767 - val_loss: 0.4168 - val_accuracy: 0.8476
Epoch 8/10
1719/1719 [==============================] - 2s 951us/step - loss: 0.3258 - accuracy: 0.8779 - val_loss: 0.3570 - val_accuracy: 0.8674
Epoch 9/10
1719/1719 [==============================] - 2s 970us/step - loss: 0.3269 - accuracy: 0.8787 - val_loss: 0.3702 - val_accuracy: 0.8578
Epoch 10/10
1719/1719 [==============================] - 2s 948us/step - loss: 0.3169 - accuracy: 0.8809 - val_loss: 0.3907 - val_accuracy: 0.8578
###Markdown
Exercises 1. to 7. 1. Glorot initialization and He initialization were designed to make the output standard deviation as close as possible to the input standard deviation, at least at the beginning of training. This reduces the vanishing/exploding gradients problem.2. No, all weights should be sampled independently; they should not all have the same initial value. One important goal of sampling weights randomly is to break symmetry: if all the weights have the same initial value, even if that value is not zero, then symmetry is not broken (i.e., all neurons in a given layer are equivalent), and backpropagation will be unable to break it. Concretely, this means that all the neurons in any given layer will always have the same weights. It's like having just one neuron per layer, and much slower. It is virtually impossible for such a configuration to converge to a good solution.3. It is perfectly fine to initialize the bias terms to zero. Some people like to initialize them just like weights, and that's OK too; it does not make much difference.4. ReLU is usually a good default for the hidden layers, as it is fast and yields good results. Its ability to output precisely zero can also be useful in some cases (e.g., see Chapter 17). Moreover, it can sometimes benefit from optimized implementations as well as from hardware acceleration. The leaky ReLU variants of ReLU can improve the model's quality without hindering its speed too much compared to ReLU. For large neural nets and more complex problems, GELU, Swish and Mish can give you a slightly higher quality model, but they have a computational cost. The hyperbolic tangent (tanh) can be useful in the output layer if you need to output a number in a fixed range (by default between –1 and 1), but nowadays it is not used much in hidden layers, except in recurrent nets. The sigmoid activation function is also useful in the output layer when you need to estimate a probability (e.g., for binary classification), but it is rarely used in hidden layers (there are exceptions—for example, for the coding layer of variational autoencoders; see Chapter 17). The softplus activation function is useful in the output layer when you need to ensure that the output will always be positive. The softmax activation function is useful in the output layer to estimate probabilities for mutually exclusive classes, but it is rarely (if ever) used in hidden layers.5. If you set the `momentum` hyperparameter too close to 1 (e.g., 0.99999) when using an `SGD` optimizer, then the algorithm will likely pick up a lot of speed, hopefully moving roughly toward the global minimum, but its momentum will carry it right past the minimum. Then it will slow down and come back, accelerate again, overshoot again, and so on. It may oscillate this way many times before converging, so overall it will take much longer to converge than with a smaller `momentum` value.6. One way to produce a sparse model (i.e., with most weights equal to zero) is to train the model normally, then zero out tiny weights. For more sparsity, you can apply ℓ1 regularization during training, which pushes the optimizer toward sparsity. A third option is to use the TensorFlow Model Optimization Toolkit.7. Yes, dropout does slow down training, in general roughly by a factor of two. However, it has no impact on inference speed since it is only turned on during training. MC Dropout is exactly like dropout during training, but it is still active during inference, so each inference is slowed down slightly. 
8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the Swish activation function.*
###Code
tf.random.set_seed(42)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(tf.keras.layers.Dense(100,
activation="swish",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `tf.keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(tf.keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = tf.keras.optimizers.Nadam(learning_rate=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
cifar10 = tf.keras.datasets.cifar10.load_data()
(X_train_full, y_train_full), (X_test, y_test) = cifar10
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = tf.keras.callbacks.EarlyStopping(patience=20,
restore_best_weights=True)
model_checkpoint_cb = tf.keras.callbacks.ModelCheckpoint("my_cifar10_model",
save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = Path() / "my_cifar10_logs" / f"run_{run_index:03d}"
tensorboard_cb = tf.keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%load_ext tensorboard
%tensorboard --logdir=./my_cifar10_logs
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 2ms/step - loss: 1.5062 - accuracy: 0.4676
###Markdown
The model with the lowest validation loss gets about 46.8% accuracy on the validation set. It took 29 epochs to reach the lowest validation loss, with roughly 10 seconds per epoch on my laptop (without a GPU). Let's see if we can improve the model using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:
* I added a BN layer after every Dense layer (before the activation function), except for the output layer.
* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.
* I renamed the run directories to run_bn_* and the model file name to `my_cifar10_bn_model`.
###Code
tf.random.set_seed(42)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(tf.keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Activation("swish"))
model.add(tf.keras.layers.Dense(10, activation="softmax"))
optimizer = tf.keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = tf.keras.callbacks.EarlyStopping(patience=20,
restore_best_weights=True)
model_checkpoint_cb = tf.keras.callbacks.ModelCheckpoint("my_cifar10_bn_model",
save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = Path() / "my_cifar10_logs" / f"run_bn_{run_index:03d}"
tensorboard_cb = tf.keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model.evaluate(X_valid, y_valid)
###Output
Epoch 1/100
1403/1407 [============================>.] - ETA: 0s - loss: 2.0377 - accuracy: 0.2523INFO:tensorflow:Assets written to: my_cifar10_bn_model/assets
1407/1407 [==============================] - 32s 18ms/step - loss: 2.0374 - accuracy: 0.2525 - val_loss: 1.8766 - val_accuracy: 0.3154
Epoch 2/100
1407/1407 [==============================] - 17s 12ms/step - loss: 1.7874 - accuracy: 0.3542 - val_loss: 1.8784 - val_accuracy: 0.3268
Epoch 3/100
1407/1407 [==============================] - 20s 15ms/step - loss: 1.6806 - accuracy: 0.3969 - val_loss: 1.9764 - val_accuracy: 0.3252
Epoch 4/100
1403/1407 [============================>.] - ETA: 0s - loss: 1.6111 - accuracy: 0.4229INFO:tensorflow:Assets written to: my_cifar10_bn_model/assets
1407/1407 [==============================] - 24s 17ms/step - loss: 1.6112 - accuracy: 0.4228 - val_loss: 1.7087 - val_accuracy: 0.3750
Epoch 5/100
1402/1407 [============================>.] - ETA: 0s - loss: 1.5520 - accuracy: 0.4478INFO:tensorflow:Assets written to: my_cifar10_bn_model/assets
1407/1407 [==============================] - 21s 15ms/step - loss: 1.5521 - accuracy: 0.4476 - val_loss: 1.6272 - val_accuracy: 0.4176
Epoch 6/100
1406/1407 [============================>.] - ETA: 0s - loss: 1.5030 - accuracy: 0.4659INFO:tensorflow:Assets written to: my_cifar10_bn_model/assets
1407/1407 [==============================] - 23s 16ms/step - loss: 1.5030 - accuracy: 0.4660 - val_loss: 1.5401 - val_accuracy: 0.4452
Epoch 7/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.4559 - accuracy: 0.4812 - val_loss: 1.6990 - val_accuracy: 0.3952
Epoch 8/100
1403/1407 [============================>.] - ETA: 0s - loss: 1.4169 - accuracy: 0.4987INFO:tensorflow:Assets written to: my_cifar10_bn_model/assets
1407/1407 [==============================] - 21s 15ms/step - loss: 1.4168 - accuracy: 0.4987 - val_loss: 1.5078 - val_accuracy: 0.4652
Epoch 9/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.3863 - accuracy: 0.5123 - val_loss: 1.5513 - val_accuracy: 0.4470
Epoch 10/100
1407/1407 [==============================] - 17s 12ms/step - loss: 1.3514 - accuracy: 0.5216 - val_loss: 1.5208 - val_accuracy: 0.4562
Epoch 11/100
1407/1407 [==============================] - 16s 12ms/step - loss: 1.3220 - accuracy: 0.5314 - val_loss: 1.7301 - val_accuracy: 0.4206
Epoch 12/100
1404/1407 [============================>.] - ETA: 0s - loss: 1.2933 - accuracy: 0.5410INFO:tensorflow:Assets written to: my_cifar10_bn_model/assets
1407/1407 [==============================] - 25s 18ms/step - loss: 1.2931 - accuracy: 0.5410 - val_loss: 1.4909 - val_accuracy: 0.4734
Epoch 13/100
1407/1407 [==============================] - 16s 11ms/step - loss: 1.2702 - accuracy: 0.5490 - val_loss: 1.5256 - val_accuracy: 0.4636
Epoch 14/100
1407/1407 [==============================] - 17s 12ms/step - loss: 1.2424 - accuracy: 0.5591 - val_loss: 1.5569 - val_accuracy: 0.4624
Epoch 15/100
<<12 more lines>>
Epoch 21/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.1174 - accuracy: 0.6066 - val_loss: 1.5241 - val_accuracy: 0.4828
Epoch 22/100
1407/1407 [==============================] - 18s 13ms/step - loss: 1.0978 - accuracy: 0.6128 - val_loss: 1.5313 - val_accuracy: 0.4772
Epoch 23/100
1407/1407 [==============================] - 17s 12ms/step - loss: 1.0844 - accuracy: 0.6198 - val_loss: 1.4993 - val_accuracy: 0.4924
Epoch 24/100
1407/1407 [==============================] - 17s 12ms/step - loss: 1.0677 - accuracy: 0.6244 - val_loss: 1.4622 - val_accuracy: 0.5078
Epoch 25/100
1407/1407 [==============================] - 18s 13ms/step - loss: 1.0571 - accuracy: 0.6297 - val_loss: 1.4917 - val_accuracy: 0.4990
Epoch 26/100
1407/1407 [==============================] - 19s 14ms/step - loss: 1.0395 - accuracy: 0.6327 - val_loss: 1.4888 - val_accuracy: 0.4896
Epoch 27/100
1407/1407 [==============================] - 18s 13ms/step - loss: 1.0298 - accuracy: 0.6370 - val_loss: 1.5358 - val_accuracy: 0.5024
Epoch 28/100
1407/1407 [==============================] - 18s 13ms/step - loss: 1.0150 - accuracy: 0.6444 - val_loss: 1.5219 - val_accuracy: 0.5030
Epoch 29/100
1407/1407 [==============================] - 16s 12ms/step - loss: 1.0100 - accuracy: 0.6456 - val_loss: 1.4933 - val_accuracy: 0.5098
Epoch 30/100
1407/1407 [==============================] - 20s 14ms/step - loss: 0.9956 - accuracy: 0.6492 - val_loss: 1.4756 - val_accuracy: 0.5012
Epoch 31/100
1407/1407 [==============================] - 16s 11ms/step - loss: 0.9787 - accuracy: 0.6576 - val_loss: 1.5181 - val_accuracy: 0.4936
Epoch 32/100
1407/1407 [==============================] - 17s 12ms/step - loss: 0.9710 - accuracy: 0.6565 - val_loss: 1.7510 - val_accuracy: 0.4568
Epoch 33/100
1407/1407 [==============================] - 20s 14ms/step - loss: 0.9613 - accuracy: 0.6628 - val_loss: 1.5576 - val_accuracy: 0.4910
Epoch 34/100
1407/1407 [==============================] - 19s 14ms/step - loss: 0.9530 - accuracy: 0.6651 - val_loss: 1.5087 - val_accuracy: 0.5046
Epoch 35/100
1407/1407 [==============================] - 19s 13ms/step - loss: 0.9388 - accuracy: 0.6701 - val_loss: 1.5534 - val_accuracy: 0.4950
Epoch 36/100
1407/1407 [==============================] - 17s 12ms/step - loss: 0.9331 - accuracy: 0.6743 - val_loss: 1.5033 - val_accuracy: 0.5046
Epoch 37/100
1407/1407 [==============================] - 19s 14ms/step - loss: 0.9144 - accuracy: 0.6808 - val_loss: 1.5679 - val_accuracy: 0.5028
157/157 [==============================] - 0s 2ms/step - loss: 1.4236 - accuracy: 0.5074
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 29 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 12 epochs and continued to make progress until the 17th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.
* *Does BN produce a better model?* Yes! The final model is also much better, with 50.7% validation accuracy instead of 46.8%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic; see Chapter 14).
* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 15s instead of 10s, because of the extra computations required by the BN layers. But overall the training time (wall time) to reach the best model was shortened by about 10%. d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
tf.random.set_seed(42)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(tf.keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(tf.keras.layers.Dense(10, activation="softmax"))
optimizer = tf.keras.optimizers.Nadam(learning_rate=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = tf.keras.callbacks.EarlyStopping(
patience=20, restore_best_weights=True)
model_checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
"my_cifar10_selu_model", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = Path() / "my_cifar10_logs" / f"run_selu_{run_index:03d}"
tensorboard_cb = tf.keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
1403/1407 [============================>.] - ETA: 0s - loss: 1.9386 - accuracy: 0.3045INFO:tensorflow:Assets written to: my_cifar10_selu_model/assets
1407/1407 [==============================] - 20s 13ms/step - loss: 1.9385 - accuracy: 0.3046 - val_loss: 1.8175 - val_accuracy: 0.3510
Epoch 2/100
1405/1407 [============================>.] - ETA: 0s - loss: 1.7241 - accuracy: 0.3869INFO:tensorflow:Assets written to: my_cifar10_selu_model/assets
1407/1407 [==============================] - 16s 11ms/step - loss: 1.7241 - accuracy: 0.3869 - val_loss: 1.7677 - val_accuracy: 0.3614
Epoch 3/100
1407/1407 [==============================] - ETA: 0s - loss: 1.6272 - accuracy: 0.4263INFO:tensorflow:Assets written to: my_cifar10_selu_model/assets
1407/1407 [==============================] - 18s 13ms/step - loss: 1.6272 - accuracy: 0.4263 - val_loss: 1.6878 - val_accuracy: 0.4054
Epoch 4/100
1406/1407 [============================>.] - ETA: 0s - loss: 1.5644 - accuracy: 0.4492INFO:tensorflow:Assets written to: my_cifar10_selu_model/assets
1407/1407 [==============================] - 18s 13ms/step - loss: 1.5643 - accuracy: 0.4492 - val_loss: 1.6589 - val_accuracy: 0.4304
Epoch 5/100
1404/1407 [============================>.] - ETA: 0s - loss: 1.5080 - accuracy: 0.4712INFO:tensorflow:Assets written to: my_cifar10_selu_model/assets
1407/1407 [==============================] - 16s 11ms/step - loss: 1.5080 - accuracy: 0.4712 - val_loss: 1.5651 - val_accuracy: 0.4538
Epoch 6/100
1404/1407 [============================>.] - ETA: 0s - loss: 1.4611 - accuracy: 0.4873INFO:tensorflow:Assets written to: my_cifar10_selu_model/assets
1407/1407 [==============================] - 17s 12ms/step - loss: 1.4613 - accuracy: 0.4872 - val_loss: 1.5305 - val_accuracy: 0.4678
Epoch 7/100
1407/1407 [==============================] - 17s 12ms/step - loss: 1.4174 - accuracy: 0.5077 - val_loss: 1.5346 - val_accuracy: 0.4558
Epoch 8/100
1406/1407 [============================>.] - ETA: 0s - loss: 1.3781 - accuracy: 0.5175INFO:tensorflow:Assets written to: my_cifar10_selu_model/assets
1407/1407 [==============================] - 17s 12ms/step - loss: 1.3781 - accuracy: 0.5175 - val_loss: 1.4773 - val_accuracy: 0.4882
Epoch 9/100
1407/1407 [==============================] - 16s 11ms/step - loss: 1.3413 - accuracy: 0.5345 - val_loss: 1.5021 - val_accuracy: 0.4764
Epoch 10/100
1407/1407 [==============================] - 15s 10ms/step - loss: 1.3182 - accuracy: 0.5422 - val_loss: 1.5709 - val_accuracy: 0.4762
Epoch 11/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.2832 - accuracy: 0.5571 - val_loss: 1.5345 - val_accuracy: 0.4868
Epoch 12/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.2557 - accuracy: 0.5667 - val_loss: 1.5024 - val_accuracy: 0.4900
Epoch 13/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.2373 - accuracy: 0.5710 - val_loss: 1.5114 - val_accuracy: 0.5028
Epoch 14/100
1404/1407 [============================>.] - ETA: 0s - loss: 1.2071 - accuracy: 0.5846INFO:tensorflow:Assets written to: my_cifar10_selu_model/assets
1407/1407 [==============================] - 17s 12ms/step - loss: 1.2073 - accuracy: 0.5847 - val_loss: 1.4608 - val_accuracy: 0.5026
Epoch 15/100
1407/1407 [==============================] - 16s 11ms/step - loss: 1.1843 - accuracy: 0.5940 - val_loss: 1.4962 - val_accuracy: 0.5038
Epoch 16/100
1407/1407 [==============================] - 16s 12ms/step - loss: 1.1617 - accuracy: 0.6026 - val_loss: 1.5255 - val_accuracy: 0.5062
Epoch 17/100
1407/1407 [==============================] - 16s 11ms/step - loss: 1.1452 - accuracy: 0.6084 - val_loss: 1.5057 - val_accuracy: 0.5036
Epoch 18/100
1407/1407 [==============================] - 17s 12ms/step - loss: 1.1297 - accuracy: 0.6145 - val_loss: 1.5097 - val_accuracy: 0.5010
Epoch 19/100
1407/1407 [==============================] - 16s 12ms/step - loss: 1.1004 - accuracy: 0.6245 - val_loss: 1.5218 - val_accuracy: 0.5014
Epoch 20/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.0971 - accuracy: 0.6304 - val_loss: 1.5253 - val_accuracy: 0.5090
Epoch 21/100
1407/1407 [==============================] - 16s 11ms/step - loss: 1.0670 - accuracy: 0.6345 - val_loss: 1.5006 - val_accuracy: 0.5034
Epoch 22/100
1407/1407 [==============================] - 16s 11ms/step - loss: 1.0544 - accuracy: 0.6407 - val_loss: 1.5244 - val_accuracy: 0.5010
Epoch 23/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.0338 - accuracy: 0.6502 - val_loss: 1.5355 - val_accuracy: 0.5096
Epoch 24/100
1407/1407 [==============================] - 14s 10ms/step - loss: 1.0281 - accuracy: 0.6514 - val_loss: 1.5257 - val_accuracy: 0.5164
Epoch 25/100
1407/1407 [==============================] - 14s 10ms/step - loss: 1.4097 - accuracy: 0.6478 - val_loss: 1.8203 - val_accuracy: 0.3514
Epoch 26/100
1407/1407 [==============================] - 16s 11ms/step - loss: 1.3733 - accuracy: 0.5157 - val_loss: 1.5600 - val_accuracy: 0.4664
Epoch 27/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.2032 - accuracy: 0.5814 - val_loss: 1.5367 - val_accuracy: 0.4944
Epoch 28/100
1407/1407 [==============================] - 16s 11ms/step - loss: 1.1291 - accuracy: 0.6121 - val_loss: 1.5333 - val_accuracy: 0.4852
Epoch 29/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.0734 - accuracy: 0.6317 - val_loss: 1.5475 - val_accuracy: 0.5032
Epoch 30/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.0294 - accuracy: 0.6469 - val_loss: 1.5400 - val_accuracy: 0.5052
Epoch 31/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.0081 - accuracy: 0.6605 - val_loss: 1.5617 - val_accuracy: 0.4856
Epoch 32/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.0109 - accuracy: 0.6603 - val_loss: 1.5727 - val_accuracy: 0.5124
Epoch 33/100
1407/1407 [==============================] - 17s 12ms/step - loss: 0.9646 - accuracy: 0.6762 - val_loss: 1.5333 - val_accuracy: 0.5174
Epoch 34/100
1407/1407 [==============================] - 16s 11ms/step - loss: 0.9597 - accuracy: 0.6789 - val_loss: 1.5601 - val_accuracy: 0.5016
157/157 [==============================] - 0s 1ms/step - loss: 1.4608 - accuracy: 0.5026
###Markdown
This model reached the first model's validation loss in just 8 epochs. After 14 epochs, it reached its lowest validation loss, with about 50.3% accuracy, which is better than the original model (46.8%), but not quite as good as the model using batch normalization (50.7%). Each epoch took only 9 seconds. So it's the fastest model to train so far. e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
tf.random.set_seed(42)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(tf.keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(tf.keras.layers.AlphaDropout(rate=0.1))
model.add(tf.keras.layers.Dense(10, activation="softmax"))
optimizer = tf.keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = tf.keras.callbacks.EarlyStopping(
patience=20, restore_best_weights=True)
model_checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
"my_cifar10_alpha_dropout_model", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = Path() / "my_cifar10_logs" / f"run_alpha_dropout_{run_index:03d}"
tensorboard_cb = tf.keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
1405/1407 [============================>.] - ETA: 0s - loss: 1.8953 - accuracy: 0.3240INFO:tensorflow:Assets written to: my_cifar10_alpha_dropout_model/assets
1407/1407 [==============================] - 18s 11ms/step - loss: 1.8950 - accuracy: 0.3239 - val_loss: 1.7556 - val_accuracy: 0.3812
Epoch 2/100
1403/1407 [============================>.] - ETA: 0s - loss: 1.6618 - accuracy: 0.4129INFO:tensorflow:Assets written to: my_cifar10_alpha_dropout_model/assets
1407/1407 [==============================] - 16s 11ms/step - loss: 1.6618 - accuracy: 0.4130 - val_loss: 1.6563 - val_accuracy: 0.4114
Epoch 3/100
1402/1407 [============================>.] - ETA: 0s - loss: 1.5772 - accuracy: 0.4431INFO:tensorflow:Assets written to: my_cifar10_alpha_dropout_model/assets
1407/1407 [==============================] - 16s 11ms/step - loss: 1.5770 - accuracy: 0.4432 - val_loss: 1.6507 - val_accuracy: 0.4232
Epoch 4/100
1406/1407 [============================>.] - ETA: 0s - loss: 1.5081 - accuracy: 0.4673INFO:tensorflow:Assets written to: my_cifar10_alpha_dropout_model/assets
1407/1407 [==============================] - 15s 10ms/step - loss: 1.5081 - accuracy: 0.4672 - val_loss: 1.5892 - val_accuracy: 0.4566
Epoch 5/100
1403/1407 [============================>.] - ETA: 0s - loss: 1.4560 - accuracy: 0.4902INFO:tensorflow:Assets written to: my_cifar10_alpha_dropout_model/assets
1407/1407 [==============================] - 14s 10ms/step - loss: 1.4561 - accuracy: 0.4902 - val_loss: 1.5382 - val_accuracy: 0.4696
Epoch 6/100
1401/1407 [============================>.] - ETA: 0s - loss: 1.4095 - accuracy: 0.5050INFO:tensorflow:Assets written to: my_cifar10_alpha_dropout_model/assets
1407/1407 [==============================] - 16s 11ms/step - loss: 1.4094 - accuracy: 0.5050 - val_loss: 1.5236 - val_accuracy: 0.4818
Epoch 7/100
1401/1407 [============================>.] - ETA: 0s - loss: 1.3634 - accuracy: 0.5234INFO:tensorflow:Assets written to: my_cifar10_alpha_dropout_model/assets
1407/1407 [==============================] - 14s 10ms/step - loss: 1.3636 - accuracy: 0.5232 - val_loss: 1.5139 - val_accuracy: 0.4840
Epoch 8/100
1405/1407 [============================>.] - ETA: 0s - loss: 1.3297 - accuracy: 0.5377INFO:tensorflow:Assets written to: my_cifar10_alpha_dropout_model/assets
1407/1407 [==============================] - 15s 11ms/step - loss: 1.3296 - accuracy: 0.5378 - val_loss: 1.4780 - val_accuracy: 0.4982
Epoch 9/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.2907 - accuracy: 0.5485 - val_loss: 1.5151 - val_accuracy: 0.4854
Epoch 10/100
1407/1407 [==============================] - 13s 10ms/step - loss: 1.2559 - accuracy: 0.5646 - val_loss: 1.4980 - val_accuracy: 0.4976
Epoch 11/100
1407/1407 [==============================] - 14s 10ms/step - loss: 1.2221 - accuracy: 0.5767 - val_loss: 1.5199 - val_accuracy: 0.4990
Epoch 12/100
1407/1407 [==============================] - 13s 9ms/step - loss: 1.1960 - accuracy: 0.5870 - val_loss: 1.5167 - val_accuracy: 0.5030
Epoch 13/100
1407/1407 [==============================] - 14s 10ms/step - loss: 1.1684 - accuracy: 0.5955 - val_loss: 1.5815 - val_accuracy: 0.5014
Epoch 14/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.1463 - accuracy: 0.6025 - val_loss: 1.5427 - val_accuracy: 0.5112
Epoch 15/100
1407/1407 [==============================] - 13s 9ms/step - loss: 1.1125 - accuracy: 0.6169 - val_loss: 1.5868 - val_accuracy: 0.5212
Epoch 16/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0854 - accuracy: 0.6243 - val_loss: 1.6234 - val_accuracy: 0.5090
Epoch 17/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.0668 - accuracy: 0.6328 - val_loss: 1.6162 - val_accuracy: 0.5072
Epoch 18/100
1407/1407 [==============================] - 15s 10ms/step - loss: 1.0440 - accuracy: 0.6442 - val_loss: 1.5748 - val_accuracy: 0.5162
Epoch 19/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.0272 - accuracy: 0.6477 - val_loss: 1.6518 - val_accuracy: 0.5200
Epoch 20/100
1407/1407 [==============================] - 13s 10ms/step - loss: 1.0007 - accuracy: 0.6594 - val_loss: 1.6224 - val_accuracy: 0.5186
Epoch 21/100
1407/1407 [==============================] - 15s 10ms/step - loss: 0.9824 - accuracy: 0.6639 - val_loss: 1.6972 - val_accuracy: 0.5136
Epoch 22/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9660 - accuracy: 0.6714 - val_loss: 1.7210 - val_accuracy: 0.5278
Epoch 23/100
1407/1407 [==============================] - 13s 10ms/step - loss: 0.9472 - accuracy: 0.6780 - val_loss: 1.6436 - val_accuracy: 0.5006
Epoch 24/100
1407/1407 [==============================] - 14s 10ms/step - loss: 0.9314 - accuracy: 0.6819 - val_loss: 1.7059 - val_accuracy: 0.5160
Epoch 25/100
1407/1407 [==============================] - 13s 9ms/step - loss: 0.9172 - accuracy: 0.6888 - val_loss: 1.6926 - val_accuracy: 0.5200
Epoch 26/100
1407/1407 [==============================] - 14s 10ms/step - loss: 0.8990 - accuracy: 0.6947 - val_loss: 1.7705 - val_accuracy: 0.5148
Epoch 27/100
1407/1407 [==============================] - 13s 9ms/step - loss: 0.8758 - accuracy: 0.7028 - val_loss: 1.7023 - val_accuracy: 0.5198
Epoch 28/100
1407/1407 [==============================] - 12s 8ms/step - loss: 0.8622 - accuracy: 0.7090 - val_loss: 1.7567 - val_accuracy: 0.5184
157/157 [==============================] - 0s 1ms/step - loss: 1.4780 - accuracy: 0.4982
###Markdown
The model reaches about 49.8% accuracy on the validation set (as shown by the `evaluate()` output above). That's worse than without dropout (50.3%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(tf.keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = tf.keras.Sequential([
(
MCAlphaDropout(layer.rate)
if isinstance(layer, tf.keras.layers.AlphaDropout)
else layer
)
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple of utility functions. The first will run the model many times (10 by default) and return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return Y_probas.argmax(axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
tf.random.set_seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = (y_pred == y_valid[:, 0]).mean()
accuracy
###Output
_____no_output_____
###Markdown
We get back to roughly the accuracy of the model without dropout in this case (about 50.3% accuracy). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
tf.random.set_seed(42)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(tf.keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(tf.keras.layers.AlphaDropout(rate=0.1))
model.add(tf.keras.layers.Dense(10, activation="softmax"))
optimizer = tf.keras.optimizers.SGD()
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1,
batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
tf.random.set_seed(42)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(tf.keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(tf.keras.layers.AlphaDropout(rate=0.1))
model.add(tf.keras.layers.Dense(10, activation="softmax"))
optimizer = tf.keras.optimizers.SGD(learning_rate=2e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
n_iterations = math.ceil(len(X_train_scaled) / batch_size) * n_epochs
onecycle = OneCycleScheduler(n_iterations, max_lr=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/15
352/352 [==============================] - 3s 9ms/step - loss: 2.0559 - accuracy: 0.2839 - val_loss: 1.7917 - val_accuracy: 0.3768
Epoch 2/15
352/352 [==============================] - 3s 8ms/step - loss: 1.7596 - accuracy: 0.3797 - val_loss: 1.6566 - val_accuracy: 0.4258
Epoch 3/15
352/352 [==============================] - 3s 8ms/step - loss: 1.6199 - accuracy: 0.4247 - val_loss: 1.6395 - val_accuracy: 0.4260
Epoch 4/15
352/352 [==============================] - 3s 9ms/step - loss: 1.5451 - accuracy: 0.4524 - val_loss: 1.6202 - val_accuracy: 0.4408
Epoch 5/15
352/352 [==============================] - 3s 8ms/step - loss: 1.4952 - accuracy: 0.4691 - val_loss: 1.5981 - val_accuracy: 0.4488
Epoch 6/15
352/352 [==============================] - 3s 9ms/step - loss: 1.4541 - accuracy: 0.4842 - val_loss: 1.5720 - val_accuracy: 0.4490
Epoch 7/15
352/352 [==============================] - 3s 9ms/step - loss: 1.4171 - accuracy: 0.4967 - val_loss: 1.6035 - val_accuracy: 0.4470
Epoch 8/15
352/352 [==============================] - 3s 9ms/step - loss: 1.3497 - accuracy: 0.5194 - val_loss: 1.4918 - val_accuracy: 0.4864
Epoch 9/15
352/352 [==============================] - 3s 9ms/step - loss: 1.2788 - accuracy: 0.5459 - val_loss: 1.5597 - val_accuracy: 0.4672
Epoch 10/15
352/352 [==============================] - 3s 9ms/step - loss: 1.2070 - accuracy: 0.5707 - val_loss: 1.5845 - val_accuracy: 0.4864
Epoch 11/15
352/352 [==============================] - 3s 10ms/step - loss: 1.1433 - accuracy: 0.5926 - val_loss: 1.5293 - val_accuracy: 0.4998
Epoch 12/15
352/352 [==============================] - 3s 9ms/step - loss: 1.0745 - accuracy: 0.6182 - val_loss: 1.5118 - val_accuracy: 0.5072
Epoch 13/15
352/352 [==============================] - 3s 10ms/step - loss: 1.0030 - accuracy: 0.6413 - val_loss: 1.5388 - val_accuracy: 0.5204
Epoch 14/15
352/352 [==============================] - 3s 10ms/step - loss: 0.9388 - accuracy: 0.6654 - val_loss: 1.5547 - val_accuracy: 0.5210
Epoch 15/15
352/352 [==============================] - 3s 9ms/step - loss: 0.8989 - accuracy: 0.6805 - val_loss: 1.5835 - val_accuracy: 0.5242
###Markdown
Adam Optimizer
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimizer
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimizer
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c``` * Keras uses `c=1` and `s = 1 / decay`
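A quick worked check of this formula (a sketch using the same settings as the cell below, lr0 = 0.01 and decay = 1e-4, hence s = 1/decay = 10,000 steps and c = 1): the learning rate decays hyperbolically, dropping to lr0/2 after s steps, lr0/3 after 2s steps, and so on.

```python
lr0, s = 0.01, 10_000                      # s = 1 / decay
for steps in (0, s, 2 * s, 10 * s):
    print(steps, lr0 / (1 + steps / s))    # 0.01, 0.005, 0.00333..., 0.000909...
```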
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
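As a quick sanity check (using the same values as the function below, lr0 = 0.01 and s = 20): the learning rate is divided by 10 every s epochs, so it goes from 0.01 at epoch 0 to roughly 0.0056 at epoch 5, roughly 0.0032 at epoch 10, and 0.001 at epoch 20.

```python
for epoch in (0, 5, 10, 20):
    print(epoch, 0.01 * 0.1 ** (epoch / 20))   # 0.01, ~0.0056, ~0.0032, 0.001
```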
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
This schedule function can take the current learning rate as its second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
###Markdown
To update the learning rate at every iteration rather than every epoch, you must write your own custom callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
        # Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
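        # `boundaries > epoch` is a boolean array; argmax returns the index of its
        # first True value, so subtracting 1 selects the rate for the interval that
        # contains `epoch` (and -1 wraps to the last rate once all boundaries are passed)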
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras Schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4894 - accuracy: 0.8277 - val_loss: 0.4096 - val_accuracy: 0.8592
Epoch 2/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3820 - accuracy: 0.8650 - val_loss: 0.3742 - val_accuracy: 0.8700
Epoch 3/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3487 - accuracy: 0.8767 - val_loss: 0.3736 - val_accuracy: 0.8686
Epoch 4/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3265 - accuracy: 0.8838 - val_loss: 0.3496 - val_accuracy: 0.8798
Epoch 5/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3105 - accuracy: 0.8899 - val_loss: 0.3434 - val_accuracy: 0.8800
Epoch 6/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2959 - accuracy: 0.8950 - val_loss: 0.3415 - val_accuracy: 0.8808
Epoch 7/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2855 - accuracy: 0.8987 - val_loss: 0.3354 - val_accuracy: 0.8818
Epoch 8/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2761 - accuracy: 0.9016 - val_loss: 0.3366 - val_accuracy: 0.8810
Epoch 9/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2678 - accuracy: 0.9053 - val_loss: 0.3265 - val_accuracy: 0.8852
Epoch 10/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2608 - accuracy: 0.9069 - val_loss: 0.3240 - val_accuracy: 0.8848
Epoch 11/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2551 - accuracy: 0.9088 - val_loss: 0.3251 - val_accuracy: 0.8868
Epoch 12/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2497 - accuracy: 0.9126 - val_loss: 0.3302 - val_accuracy: 0.8810
Epoch 13/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2449 - accuracy: 0.9136 - val_loss: 0.3218 - val_accuracy: 0.8872
Epoch 14/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2415 - accuracy: 0.9147 - val_loss: 0.3222 - val_accuracy: 0.8860
Epoch 15/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2375 - accuracy: 0.9167 - val_loss: 0.3208 - val_accuracy: 0.8876
Epoch 16/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2343 - accuracy: 0.9179 - val_loss: 0.3185 - val_accuracy: 0.8882
Epoch 17/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2317 - accuracy: 0.9186 - val_loss: 0.3198 - val_accuracy: 0.8890
Epoch 18/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2291 - accuracy: 0.9199 - val_loss: 0.3169 - val_accuracy: 0.8904
Epoch 19/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2269 - accuracy: 0.9206 - val_loss: 0.3197 - val_accuracy: 0.8888
Epoch 20/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2250 - accuracy: 0.9220 - val_loss: 0.3169 - val_accuracy: 0.8902
Epoch 21/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2229 - accuracy: 0.9224 - val_loss: 0.3180 - val_accuracy: 0.8904
Epoch 22/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2216 - accuracy: 0.9225 - val_loss: 0.3163 - val_accuracy: 0.8912
Epoch 23/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2201 - accuracy: 0.9233 - val_loss: 0.3171 - val_accuracy: 0.8906
Epoch 24/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2188 - accuracy: 0.9243 - val_loss: 0.3166 - val_accuracy: 0.8908
Epoch 25/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2179 - accuracy: 0.9243 - val_loss: 0.3165 - val_accuracy: 0.8904
###Markdown
For piecewise constant scheduling, use the following:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
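# Usage sketch (an assumption, mirroring the ExponentialDecay example above):
# pass this schedule object directly to an optimizer, e.g.
# optimizer = keras.optimizers.SGD(learning_rate)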
###Output
_____no_output_____
###Markdown
1Cycle Scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
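    # Records the (learning rate, loss) pair after each batch, then multiplies the
    # learning rate by `factor`, producing an exponentially increasing rate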
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
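    # Grows the learning rate exponentially from min_rate to max_rate over the given
    # number of epochs while recording the loss after each batch, then restores the
    # model's initial weights and learning rate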
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
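    # 1cycle schedule: ramp the learning rate linearly from start_rate to max_rate
    # over the first half of training, back down to start_rate over the second half,
    # then drop it further to last_rate over the last few iterations (`last_iterations`)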
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 2s 4ms/step - loss: 0.6572 - accuracy: 0.7740 - val_loss: 0.4872 - val_accuracy: 0.8338
Epoch 2/25
430/430 [==============================] - 1s 3ms/step - loss: 0.4581 - accuracy: 0.8397 - val_loss: 0.4274 - val_accuracy: 0.8524
Epoch 3/25
430/430 [==============================] - 1s 3ms/step - loss: 0.4121 - accuracy: 0.8545 - val_loss: 0.4116 - val_accuracy: 0.8588
Epoch 4/25
430/430 [==============================] - 1s 3ms/step - loss: 0.3837 - accuracy: 0.8641 - val_loss: 0.3870 - val_accuracy: 0.8686
Epoch 5/25
430/430 [==============================] - 1s 3ms/step - loss: 0.3639 - accuracy: 0.8717 - val_loss: 0.3765 - val_accuracy: 0.8676
Epoch 6/25
430/430 [==============================] - 1s 3ms/step - loss: 0.3457 - accuracy: 0.8774 - val_loss: 0.3742 - val_accuracy: 0.8708
Epoch 7/25
430/430 [==============================] - 1s 3ms/step - loss: 0.3330 - accuracy: 0.8811 - val_loss: 0.3634 - val_accuracy: 0.8704
Epoch 8/25
430/430 [==============================] - 1s 3ms/step - loss: 0.3185 - accuracy: 0.8862 - val_loss: 0.3958 - val_accuracy: 0.8608
Epoch 9/25
430/430 [==============================] - 1s 3ms/step - loss: 0.3065 - accuracy: 0.8893 - val_loss: 0.3483 - val_accuracy: 0.8762
Epoch 10/25
430/430 [==============================] - 1s 3ms/step - loss: 0.2945 - accuracy: 0.8924 - val_loss: 0.3396 - val_accuracy: 0.8812
Epoch 11/25
430/430 [==============================] - 2s 4ms/step - loss: 0.2838 - accuracy: 0.8963 - val_loss: 0.3460 - val_accuracy: 0.8796
Epoch 12/25
430/430 [==============================] - 1s 3ms/step - loss: 0.2709 - accuracy: 0.9023 - val_loss: 0.3644 - val_accuracy: 0.8696
Epoch 13/25
430/430 [==============================] - 1s 3ms/step - loss: 0.2536 - accuracy: 0.9081 - val_loss: 0.3350 - val_accuracy: 0.8838
Epoch 14/25
430/430 [==============================] - 1s 3ms/step - loss: 0.2405 - accuracy: 0.9134 - val_loss: 0.3466 - val_accuracy: 0.8812
Epoch 15/25
430/430 [==============================] - 1s 3ms/step - loss: 0.2280 - accuracy: 0.9183 - val_loss: 0.3260 - val_accuracy: 0.8840
Epoch 16/25
430/430 [==============================] - 1s 3ms/step - loss: 0.2160 - accuracy: 0.9234 - val_loss: 0.3292 - val_accuracy: 0.8834
Epoch 17/25
430/430 [==============================] - 1s 3ms/step - loss: 0.2062 - accuracy: 0.9264 - val_loss: 0.3354 - val_accuracy: 0.8862
Epoch 18/25
430/430 [==============================] - 1s 3ms/step - loss: 0.1978 - accuracy: 0.9305 - val_loss: 0.3236 - val_accuracy: 0.8906
Epoch 19/25
430/430 [==============================] - 1s 3ms/step - loss: 0.1892 - accuracy: 0.9337 - val_loss: 0.3233 - val_accuracy: 0.8904
Epoch 20/25
430/430 [==============================] - 2s 4ms/step - loss: 0.1821 - accuracy: 0.9369 - val_loss: 0.3221 - val_accuracy: 0.8926
Epoch 21/25
430/430 [==============================] - 1s 3ms/step - loss: 0.1752 - accuracy: 0.9401 - val_loss: 0.3215 - val_accuracy: 0.8904
Epoch 22/25
430/430 [==============================] - 1s 3ms/step - loss: 0.1701 - accuracy: 0.9418 - val_loss: 0.3180 - val_accuracy: 0.8956
Epoch 23/25
430/430 [==============================] - 1s 3ms/step - loss: 0.1655 - accuracy: 0.9438 - val_loss: 0.3186 - val_accuracy: 0.8942
Epoch 24/25
430/430 [==============================] - 2s 4ms/step - loss: 0.1628 - accuracy: 0.9458 - val_loss: 0.3176 - val_accuracy: 0.8924
Epoch 25/25
430/430 [==============================] - 1s 3ms/step - loss: 0.1611 - accuracy: 0.9460 - val_loss: 0.3169 - val_accuracy: 0.8930
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ Regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 8s 5ms/step - loss: 1.6313 - accuracy: 0.8113 - val_loss: 0.7218 - val_accuracy: 0.8310
Epoch 2/2
1719/1719 [==============================] - 8s 5ms/step - loss: 0.7187 - accuracy: 0.8273 - val_loss: 0.6826 - val_accuracy: 0.8382
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 8s 5ms/step - loss: 0.5838 - accuracy: 0.7998 - val_loss: 0.3730 - val_accuracy: 0.8644
Epoch 2/2
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4209 - accuracy: 0.8443 - val_loss: 0.3406 - val_accuracy: 0.8724
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4167 - accuracy: 0.8463
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
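# Run 100 stochastic forward passes with dropout kept active (training=True),
# then average the predicted probabilities across the passes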
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
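    # Keep dropout active at inference time by always calling the layer with training=True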
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use MC Dropout with this model:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max Norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4749 - accuracy: 0.8337 - val_loss: 0.3665 - val_accuracy: 0.8676
Epoch 2/2
1719/1719 [==============================] - 8s 5ms/step - loss: 0.3539 - accuracy: 0.8703 - val_loss: 0.3700 - val_accuracy: 0.8672
###Markdown
Exercise Solutions 1. to 7. See Appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that seems like a lot, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32×32-pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tested learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and compared their learning curves for 10 epochs each (using the TensorBoard callback below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(learning_rate=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. Since we use early stopping, we need a validation set. We'll use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170500096/170498071 [==============================] - 18s 0us/step
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 3ms/step - loss: 1.5014 - accuracy: 0.0882
###Markdown
The model with the lowest validation loss gets about 47% accuracy on the validation set. It took 39 epochs to reach this validation score, with roughly 10 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every `Dense` layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Epoch 1/100
2/1407 [..............................] - ETA: 9:29 - loss: 2.8693 - accuracy: 0.1094WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0364s vs `on_train_batch_end` time: 0.7737s). Check your callbacks.
1407/1407 [==============================] - 51s 36ms/step - loss: 1.8431 - accuracy: 0.3390 - val_loss: 1.7148 - val_accuracy: 0.3886
Epoch 2/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.6690 - accuracy: 0.4046 - val_loss: 1.6174 - val_accuracy: 0.4144
Epoch 3/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.5972 - accuracy: 0.4320 - val_loss: 1.5171 - val_accuracy: 0.4478
Epoch 4/100
1407/1407 [==============================] - 50s 35ms/step - loss: 1.5463 - accuracy: 0.4495 - val_loss: 1.4883 - val_accuracy: 0.4688
Epoch 5/100
1407/1407 [==============================] - 50s 35ms/step - loss: 1.5051 - accuracy: 0.4641 - val_loss: 1.4369 - val_accuracy: 0.4892
Epoch 6/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.4684 - accuracy: 0.4793 - val_loss: 1.4056 - val_accuracy: 0.5018
Epoch 7/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.4350 - accuracy: 0.4895 - val_loss: 1.4292 - val_accuracy: 0.4888
Epoch 8/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.4087 - accuracy: 0.5006 - val_loss: 1.4021 - val_accuracy: 0.5088
Epoch 9/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.3834 - accuracy: 0.5095 - val_loss: 1.3738 - val_accuracy: 0.5110
Epoch 10/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.3645 - accuracy: 0.5167 - val_loss: 1.3432 - val_accuracy: 0.5252
Epoch 11/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.3428 - accuracy: 0.5258 - val_loss: 1.3583 - val_accuracy: 0.5132
Epoch 12/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.3227 - accuracy: 0.5316 - val_loss: 1.3820 - val_accuracy: 0.5052
Epoch 13/100
1407/1407 [==============================] - 48s 34ms/step - loss: 1.3010 - accuracy: 0.5371 - val_loss: 1.3794 - val_accuracy: 0.5094
Epoch 14/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.2838 - accuracy: 0.5446 - val_loss: 1.3531 - val_accuracy: 0.5260
Epoch 15/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.2621 - accuracy: 0.5548 - val_loss: 1.3641 - val_accuracy: 0.5256
Epoch 16/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.2535 - accuracy: 0.5572 - val_loss: 1.3720 - val_accuracy: 0.5276
Epoch 17/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.2355 - accuracy: 0.5609 - val_loss: 1.3184 - val_accuracy: 0.5348
Epoch 18/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.2164 - accuracy: 0.5685 - val_loss: 1.3487 - val_accuracy: 0.5296
Epoch 19/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.2037 - accuracy: 0.5770 - val_loss: 1.3278 - val_accuracy: 0.5366
Epoch 20/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.1916 - accuracy: 0.5789 - val_loss: 1.3592 - val_accuracy: 0.5260
Epoch 21/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.1782 - accuracy: 0.5848 - val_loss: 1.3478 - val_accuracy: 0.5302
Epoch 22/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.1587 - accuracy: 0.5913 - val_loss: 1.3477 - val_accuracy: 0.5308
Epoch 23/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.1481 - accuracy: 0.5933 - val_loss: 1.3285 - val_accuracy: 0.5378
Epoch 24/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.1395 - accuracy: 0.5989 - val_loss: 1.3393 - val_accuracy: 0.5388
Epoch 25/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.1285 - accuracy: 0.6044 - val_loss: 1.3436 - val_accuracy: 0.5354
Epoch 26/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.1080 - accuracy: 0.6085 - val_loss: 1.3496 - val_accuracy: 0.5258
Epoch 27/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.0971 - accuracy: 0.6143 - val_loss: 1.3484 - val_accuracy: 0.5350
Epoch 28/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.0978 - accuracy: 0.6121 - val_loss: 1.3698 - val_accuracy: 0.5274
Epoch 29/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.0825 - accuracy: 0.6198 - val_loss: 1.3416 - val_accuracy: 0.5348
Epoch 30/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.0698 - accuracy: 0.6219 - val_loss: 1.3363 - val_accuracy: 0.5366
Epoch 31/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.0569 - accuracy: 0.6262 - val_loss: 1.3536 - val_accuracy: 0.5356
Epoch 32/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.0489 - accuracy: 0.6306 - val_loss: 1.3822 - val_accuracy: 0.5220
Epoch 33/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.0387 - accuracy: 0.6338 - val_loss: 1.3633 - val_accuracy: 0.5404
Epoch 34/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.0342 - accuracy: 0.6344 - val_loss: 1.3611 - val_accuracy: 0.5364
Epoch 35/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.0163 - accuracy: 0.6422 - val_loss: 1.3904 - val_accuracy: 0.5356
Epoch 36/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.0137 - accuracy: 0.6421 - val_loss: 1.3795 - val_accuracy: 0.5408
Epoch 37/100
1407/1407 [==============================] - 49s 35ms/step - loss: 0.9991 - accuracy: 0.6491 - val_loss: 1.3334 - val_accuracy: 0.5444
157/157 [==============================] - 1s 5ms/step - loss: 1.3184 - accuracy: 0.1154
###Markdown
* *Is it converging faster than before?* Much faster! The previous model took 39 epochs to reach the lowest validation loss, while the new model with BN took 18 epochs, more than twice as fast. The BN layers stabilized training and allowed a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model performs much better, with 55% accuracy instead of 47%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic; see chapter 14).* *Does BN affect training speed?* Although the model converged twice as fast, each epoch took about 16 seconds instead of 10, because of the extra computations required by the BN layers. So overall the number of epochs was cut by about 50%, but the training time (wall time) was only cut by about 30%. Still, it's a big improvement! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
157/157 [==============================] - 1s 3ms/step - loss: 1.4753 - accuracy: 0.1256
###Markdown
We get 51.4% accuracy, which is better than the original model, but not quite as good as the model using Batch Normalization. It took 13 epochs to reach the best model, which is faster than both the original model and the BN model. Each epoch took only 10 seconds, just like the original model, so this is the fastest model so far (in terms of both epochs and wall time). e.*Exercise: Regularize the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
2/1407 [..............................] - ETA: 4:07 - loss: 2.9857 - accuracy: 0.0938WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0168s vs `on_train_batch_end` time: 0.3359s). Check your callbacks.
1407/1407 [==============================] - 23s 17ms/step - loss: 1.8896 - accuracy: 0.3275 - val_loss: 1.7313 - val_accuracy: 0.3970
Epoch 2/100
1407/1407 [==============================] - 23s 16ms/step - loss: 1.6589 - accuracy: 0.4157 - val_loss: 1.7183 - val_accuracy: 0.3916
Epoch 3/100
1407/1407 [==============================] - 23s 16ms/step - loss: 1.5727 - accuracy: 0.4479 - val_loss: 1.6073 - val_accuracy: 0.4364
Epoch 4/100
1407/1407 [==============================] - 23s 16ms/step - loss: 1.5085 - accuracy: 0.4734 - val_loss: 1.5741 - val_accuracy: 0.4524
Epoch 5/100
1407/1407 [==============================] - 23s 16ms/step - loss: 1.4525 - accuracy: 0.4946 - val_loss: 1.5663 - val_accuracy: 0.4592
Epoch 6/100
1407/1407 [==============================] - 23s 16ms/step - loss: 1.4032 - accuracy: 0.5124 - val_loss: 1.5255 - val_accuracy: 0.4644
Epoch 7/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.3581 - accuracy: 0.5255 - val_loss: 1.6598 - val_accuracy: 0.4662
Epoch 8/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.3209 - accuracy: 0.5400 - val_loss: 1.5027 - val_accuracy: 0.5002
Epoch 9/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.2845 - accuracy: 0.5562 - val_loss: 1.5246 - val_accuracy: 0.4896
Epoch 10/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.2526 - accuracy: 0.5659 - val_loss: 1.5510 - val_accuracy: 0.4956
Epoch 11/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.2160 - accuracy: 0.5808 - val_loss: 1.5559 - val_accuracy: 0.5002
Epoch 12/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.1902 - accuracy: 0.5900 - val_loss: 1.5478 - val_accuracy: 0.4968
Epoch 13/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.1602 - accuracy: 0.6021 - val_loss: 1.5727 - val_accuracy: 0.5124
Epoch 14/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.1392 - accuracy: 0.6102 - val_loss: 1.5654 - val_accuracy: 0.4944
Epoch 15/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.1086 - accuracy: 0.6210 - val_loss: 1.5868 - val_accuracy: 0.5064
Epoch 16/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.0856 - accuracy: 0.6289 - val_loss: 1.6016 - val_accuracy: 0.5042
Epoch 17/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.0620 - accuracy: 0.6397 - val_loss: 1.6458 - val_accuracy: 0.4968
Epoch 18/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.0511 - accuracy: 0.6405 - val_loss: 1.6276 - val_accuracy: 0.5096
Epoch 19/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.0203 - accuracy: 0.6514 - val_loss: 1.7246 - val_accuracy: 0.5062
Epoch 20/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.0024 - accuracy: 0.6598 - val_loss: 1.6570 - val_accuracy: 0.5064
Epoch 21/100
1407/1407 [==============================] - 22s 16ms/step - loss: 0.9845 - accuracy: 0.6662 - val_loss: 1.6697 - val_accuracy: 0.4990
Epoch 22/100
1407/1407 [==============================] - 22s 16ms/step - loss: 0.9641 - accuracy: 0.6738 - val_loss: 1.7560 - val_accuracy: 0.5010
Epoch 23/100
1407/1407 [==============================] - 22s 16ms/step - loss: 0.9387 - accuracy: 0.6797 - val_loss: 1.7716 - val_accuracy: 0.5008
Epoch 24/100
1407/1407 [==============================] - 22s 16ms/step - loss: 0.9290 - accuracy: 0.6852 - val_loss: 1.7688 - val_accuracy: 0.5026
Epoch 25/100
1407/1407 [==============================] - 22s 16ms/step - loss: 0.9176 - accuracy: 0.6899 - val_loss: 1.8131 - val_accuracy: 0.5042
Epoch 26/100
1407/1407 [==============================] - 22s 16ms/step - loss: 0.8925 - accuracy: 0.6986 - val_loss: 1.8228 - val_accuracy: 0.4904
Epoch 27/100
1407/1407 [==============================] - 22s 16ms/step - loss: 0.8680 - accuracy: 0.7060 - val_loss: 1.8546 - val_accuracy: 0.5048
Epoch 28/100
1407/1407 [==============================] - 22s 16ms/step - loss: 0.8638 - accuracy: 0.7091 - val_loss: 1.8004 - val_accuracy: 0.4954
157/157 [==============================] - 1s 3ms/step - loss: 1.5027 - accuracy: 0.0914
###Markdown
The model reaches 50.8% accuracy on the validation set. That's slightly worse than without dropout (51.4%). With an extensive hyperparameter search it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4 and 1e-3), but probably not much better in this case. Now let's use MC Dropout. We will copy the `MCAlphaDropout` class we used earlier:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple of utility functions. The first will run the model many times (10 by default) and return the averaged predicted class probabilities. The second will use these averaged probabilities to predict the class of each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get no real accuracy improvement in this case (from 50.8% to 50.9%). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(learning_rate=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(len(X_train_scaled) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/15
352/352 [==============================] - 3s 9ms/step - loss: 2.0537 - accuracy: 0.2843 - val_loss: 1.7811 - val_accuracy: 0.3744
Epoch 2/15
352/352 [==============================] - 3s 7ms/step - loss: 1.7635 - accuracy: 0.3765 - val_loss: 1.6431 - val_accuracy: 0.4252
Epoch 3/15
352/352 [==============================] - 3s 7ms/step - loss: 1.6241 - accuracy: 0.4217 - val_loss: 1.6001 - val_accuracy: 0.4368
Epoch 4/15
352/352 [==============================] - 3s 7ms/step - loss: 1.5434 - accuracy: 0.4520 - val_loss: 1.6114 - val_accuracy: 0.4310
Epoch 5/15
352/352 [==============================] - 3s 7ms/step - loss: 1.4914 - accuracy: 0.4710 - val_loss: 1.5895 - val_accuracy: 0.4434
Epoch 6/15
352/352 [==============================] - 3s 7ms/step - loss: 1.4510 - accuracy: 0.4818 - val_loss: 1.5678 - val_accuracy: 0.4506
Epoch 7/15
352/352 [==============================] - 3s 7ms/step - loss: 1.4143 - accuracy: 0.4979 - val_loss: 1.6717 - val_accuracy: 0.4294
Epoch 8/15
352/352 [==============================] - 3s 7ms/step - loss: 1.3462 - accuracy: 0.5199 - val_loss: 1.4928 - val_accuracy: 0.4956
Epoch 9/15
352/352 [==============================] - 3s 7ms/step - loss: 1.2691 - accuracy: 0.5481 - val_loss: 1.5294 - val_accuracy: 0.4818
Epoch 10/15
352/352 [==============================] - 3s 7ms/step - loss: 1.1994 - accuracy: 0.5713 - val_loss: 1.5165 - val_accuracy: 0.4978
Epoch 11/15
352/352 [==============================] - 3s 7ms/step - loss: 1.1308 - accuracy: 0.5980 - val_loss: 1.5070 - val_accuracy: 0.5100
Epoch 12/15
352/352 [==============================] - 3s 7ms/step - loss: 1.0632 - accuracy: 0.6184 - val_loss: 1.4833 - val_accuracy: 0.5244
Epoch 13/15
352/352 [==============================] - 3s 7ms/step - loss: 0.9932 - accuracy: 0.6447 - val_loss: 1.5314 - val_accuracy: 0.5292
Epoch 14/15
352/352 [==============================] - 3s 7ms/step - loss: 0.9279 - accuracy: 0.6671 - val_loss: 1.5495 - val_accuracy: 0.5248
Epoch 15/15
352/352 [==============================] - 3s 7ms/step - loss: 0.8880 - accuracy: 0.6845 - val_loss: 1.5840 - val_accuracy: 0.5288
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 751us/step - loss: 0.0562 - accuracy: 0.9940
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4.9!
###Code
(100 - 97.05) / (100 - 99.40)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(learning_rate=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
import math
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = math.ceil(len(X_train) / batch_size)
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
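# Added usage sketch (not in the original cell): a schedule function with an
# (epoch, lr) signature is passed to LearningRateScheduler just like before;
# Keras then supplies the current learning rate as the second argument.
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)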
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.learning_rate)
K.set_value(self.model.optimizer.learning_rate, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.learning_rate)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(learning_rate=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5995 - accuracy: 0.7923 - val_loss: 0.4095 - val_accuracy: 0.8606
Epoch 2/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3890 - accuracy: 0.8613 - val_loss: 0.3738 - val_accuracy: 0.8692
Epoch 3/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3530 - accuracy: 0.8772 - val_loss: 0.3735 - val_accuracy: 0.8692
Epoch 4/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3296 - accuracy: 0.8813 - val_loss: 0.3494 - val_accuracy: 0.8798
Epoch 5/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3178 - accuracy: 0.8867 - val_loss: 0.3430 - val_accuracy: 0.8794
Epoch 6/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2930 - accuracy: 0.8951 - val_loss: 0.3414 - val_accuracy: 0.8826
Epoch 7/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2854 - accuracy: 0.8985 - val_loss: 0.3354 - val_accuracy: 0.8810
Epoch 8/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9039 - val_loss: 0.3364 - val_accuracy: 0.8824
Epoch 9/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9047 - val_loss: 0.3265 - val_accuracy: 0.8846
Epoch 10/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2570 - accuracy: 0.9084 - val_loss: 0.3238 - val_accuracy: 0.8854
Epoch 11/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2502 - accuracy: 0.9117 - val_loss: 0.3250 - val_accuracy: 0.8862
Epoch 12/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2453 - accuracy: 0.9145 - val_loss: 0.3299 - val_accuracy: 0.8830
Epoch 13/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2408 - accuracy: 0.9154 - val_loss: 0.3219 - val_accuracy: 0.8870
Epoch 14/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2380 - accuracy: 0.9154 - val_loss: 0.3221 - val_accuracy: 0.8860
Epoch 15/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2378 - accuracy: 0.9166 - val_loss: 0.3208 - val_accuracy: 0.8864
Epoch 16/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2318 - accuracy: 0.9191 - val_loss: 0.3184 - val_accuracy: 0.8892
Epoch 17/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2266 - accuracy: 0.9212 - val_loss: 0.3197 - val_accuracy: 0.8906
Epoch 18/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2284 - accuracy: 0.9185 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 19/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2286 - accuracy: 0.9205 - val_loss: 0.3197 - val_accuracy: 0.8884
Epoch 20/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2288 - accuracy: 0.9211 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 21/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2265 - accuracy: 0.9212 - val_loss: 0.3179 - val_accuracy: 0.8904
Epoch 22/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2258 - accuracy: 0.9205 - val_loss: 0.3163 - val_accuracy: 0.8914
Epoch 23/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9226 - val_loss: 0.3170 - val_accuracy: 0.8904
Epoch 24/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2182 - accuracy: 0.9244 - val_loss: 0.3165 - val_accuracy: 0.8898
Epoch 25/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9229 - val_loss: 0.3164 - val_accuracy: 0.8904
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
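# Added usage sketch (not in the original cell): like ExponentialDecay above,
# the schedule object simply replaces a constant learning rate in the optimizer.
optimizer = keras.optimizers.SGD(learning_rate)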
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.learning_rate))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.learning_rate, self.model.optimizer.learning_rate * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = math.ceil(len(X) / batch_size) * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.learning_rate)
K.set_value(model.optimizer.learning_rate, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.learning_rate, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
###Output
_____no_output_____
###Markdown
**Warning**: In the `on_batch_end()` method, `logs["loss"]` used to contain the batch loss, but in TensorFlow 2.2.0 it was replaced with the mean loss (since the start of the epoch). This explains why the graph below is much smoother than in the book (if you are using TF 2.2 or above). It also means that there is a lag between the moment the batch loss starts exploding and the moment the explosion becomes clear in the graph. So you should choose a slightly smaller learning rate than you would have chosen with the "noisy" graph. Alternatively, you can tweak the `ExponentialLearningRate` callback above so it computes the batch loss (based on the current mean loss and the previous mean loss):```pythonclass ExponentialLearningRate(keras.callbacks.Callback): def __init__(self, factor): self.factor = factor self.rates = [] self.losses = [] def on_epoch_begin(self, epoch, logs=None): self.prev_loss = 0 def on_batch_end(self, batch, logs=None): batch_loss = logs["loss"] * (batch + 1) - self.prev_loss * batch self.prev_loss = logs["loss"] self.rates.append(K.get_value(self.model.optimizer.learning_rate)) self.losses.append(batch_loss) K.set_value(self.model.optimizer.learning_rate, self.model.optimizer.learning_rate * self.factor)```
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
    def __init__(self, iterations, max_rate, start_rate=None,
                 last_iterations=None, last_rate=None):
        self.iterations = iterations
        self.max_rate = max_rate
        self.start_rate = start_rate or max_rate / 10
        self.last_iterations = last_iterations or iterations // 10 + 1
        self.half_iteration = (iterations - self.last_iterations) // 2
        self.last_rate = last_rate or self.start_rate / 1000
        self.iteration = 0
    def _interpolate(self, iter1, iter2, rate1, rate2):
        # linear interpolation of the rate between iterations iter1 and iter2
        return ((rate2 - rate1) * (self.iteration - iter1)
                / (iter2 - iter1) + rate1)
    def on_batch_begin(self, batch, logs):
        if self.iteration < self.half_iteration:
            # first half of the cycle: ramp up from start_rate to max_rate
            rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
        elif self.iteration < 2 * self.half_iteration:
            # second half of the cycle: ramp back down to start_rate
            rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
                                     self.max_rate, self.start_rate)
        else:
            # final iterations: anneal linearly down to last_rate
            rate = self._interpolate(2 * self.half_iteration, self.iterations,
                                     self.start_rate, self.last_rate)
        self.iteration += 1
        K.set_value(self.model.optimizer.learning_rate, rate)
n_epochs = 25
onecycle = OneCycleScheduler(math.ceil(len(X_train) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 1s 2ms/step - loss: 0.6572 - accuracy: 0.7740 - val_loss: 0.4872 - val_accuracy: 0.8338
Epoch 2/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4580 - accuracy: 0.8397 - val_loss: 0.4274 - val_accuracy: 0.8520
Epoch 3/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4121 - accuracy: 0.8545 - val_loss: 0.4116 - val_accuracy: 0.8588
Epoch 4/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3837 - accuracy: 0.8642 - val_loss: 0.3868 - val_accuracy: 0.8688
Epoch 5/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3639 - accuracy: 0.8719 - val_loss: 0.3766 - val_accuracy: 0.8688
Epoch 6/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3456 - accuracy: 0.8775 - val_loss: 0.3739 - val_accuracy: 0.8706
Epoch 7/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3330 - accuracy: 0.8811 - val_loss: 0.3635 - val_accuracy: 0.8708
Epoch 8/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3184 - accuracy: 0.8861 - val_loss: 0.3959 - val_accuracy: 0.8610
Epoch 9/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3065 - accuracy: 0.8890 - val_loss: 0.3475 - val_accuracy: 0.8770
Epoch 10/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2943 - accuracy: 0.8927 - val_loss: 0.3392 - val_accuracy: 0.8806
Epoch 11/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2838 - accuracy: 0.8963 - val_loss: 0.3467 - val_accuracy: 0.8800
Epoch 12/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2707 - accuracy: 0.9024 - val_loss: 0.3646 - val_accuracy: 0.8696
Epoch 13/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2536 - accuracy: 0.9079 - val_loss: 0.3350 - val_accuracy: 0.8842
Epoch 14/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2405 - accuracy: 0.9135 - val_loss: 0.3465 - val_accuracy: 0.8794
Epoch 15/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2279 - accuracy: 0.9185 - val_loss: 0.3257 - val_accuracy: 0.8830
Epoch 16/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2159 - accuracy: 0.9232 - val_loss: 0.3294 - val_accuracy: 0.8824
Epoch 17/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2062 - accuracy: 0.9263 - val_loss: 0.3333 - val_accuracy: 0.8882
Epoch 18/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1978 - accuracy: 0.9301 - val_loss: 0.3235 - val_accuracy: 0.8898
Epoch 19/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1892 - accuracy: 0.9337 - val_loss: 0.3233 - val_accuracy: 0.8906
Epoch 20/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1821 - accuracy: 0.9365 - val_loss: 0.3224 - val_accuracy: 0.8928
Epoch 21/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1752 - accuracy: 0.9400 - val_loss: 0.3220 - val_accuracy: 0.8908
Epoch 22/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1700 - accuracy: 0.9416 - val_loss: 0.3180 - val_accuracy: 0.8962
Epoch 23/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1655 - accuracy: 0.9438 - val_loss: 0.3187 - val_accuracy: 0.8940
Epoch 24/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1627 - accuracy: 0.9454 - val_loss: 0.3177 - val_accuracy: 0.8932
Epoch 25/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1610 - accuracy: 0.9462 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 3.2911 - accuracy: 0.7924 - val_loss: 0.7218 - val_accuracy: 0.8310
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.7282 - accuracy: 0.8245 - val_loss: 0.6826 - val_accuracy: 0.8382
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 0.7611 - accuracy: 0.7576 - val_loss: 0.3730 - val_accuracy: 0.8644
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4306 - accuracy: 0.8401 - val_loss: 0.3395 - val_accuracy: 0.8722
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4225 - accuracy: 0.8432
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5763 - accuracy: 0.8020 - val_loss: 0.3674 - val_accuracy: 0.8674
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3545 - accuracy: 0.8709 - val_loss: 0.3714 - val_accuracy: 0.8662
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(learning_rate=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4960 - accuracy: 0.4762
###Markdown
The model with the lowest validation loss gets about 47.6% accuracy on the validation set. It took 27 epochs to reach the lowest validation loss, with roughly 8 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 19s 9ms/step - loss: 1.9765 - accuracy: 0.2968 - val_loss: 1.6602 - val_accuracy: 0.4042
Epoch 2/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6787 - accuracy: 0.4056 - val_loss: 1.5887 - val_accuracy: 0.4304
Epoch 3/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6097 - accuracy: 0.4274 - val_loss: 1.5781 - val_accuracy: 0.4326
Epoch 4/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5574 - accuracy: 0.4486 - val_loss: 1.5064 - val_accuracy: 0.4676
Epoch 5/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5075 - accuracy: 0.4642 - val_loss: 1.4412 - val_accuracy: 0.4844
Epoch 6/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4664 - accuracy: 0.4787 - val_loss: 1.4179 - val_accuracy: 0.4984
Epoch 7/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4334 - accuracy: 0.4932 - val_loss: 1.4277 - val_accuracy: 0.4906
Epoch 8/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.4054 - accuracy: 0.5038 - val_loss: 1.3843 - val_accuracy: 0.5130
Epoch 9/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3816 - accuracy: 0.5106 - val_loss: 1.3691 - val_accuracy: 0.5108
Epoch 10/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3547 - accuracy: 0.5206 - val_loss: 1.3552 - val_accuracy: 0.5226
Epoch 11/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.3244 - accuracy: 0.5371 - val_loss: 1.3678 - val_accuracy: 0.5142
Epoch 12/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3078 - accuracy: 0.5393 - val_loss: 1.3844 - val_accuracy: 0.5080
Epoch 13/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2889 - accuracy: 0.5431 - val_loss: 1.3566 - val_accuracy: 0.5164
Epoch 14/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2607 - accuracy: 0.5559 - val_loss: 1.3626 - val_accuracy: 0.5248
Epoch 15/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2580 - accuracy: 0.5587 - val_loss: 1.3616 - val_accuracy: 0.5276
Epoch 16/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2441 - accuracy: 0.5586 - val_loss: 1.3350 - val_accuracy: 0.5286
Epoch 17/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2241 - accuracy: 0.5676 - val_loss: 1.3370 - val_accuracy: 0.5408
Epoch 18/100
<<29 more lines>>
Epoch 33/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0336 - accuracy: 0.6369 - val_loss: 1.3682 - val_accuracy: 0.5450
Epoch 34/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.0228 - accuracy: 0.6388 - val_loss: 1.3348 - val_accuracy: 0.5458
Epoch 35/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0205 - accuracy: 0.6407 - val_loss: 1.3490 - val_accuracy: 0.5440
Epoch 36/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.0008 - accuracy: 0.6489 - val_loss: 1.3568 - val_accuracy: 0.5408
Epoch 37/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9785 - accuracy: 0.6543 - val_loss: 1.3628 - val_accuracy: 0.5396
Epoch 38/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9832 - accuracy: 0.6592 - val_loss: 1.3617 - val_accuracy: 0.5482
Epoch 39/100
1407/1407 [==============================] - 12s 8ms/step - loss: 0.9707 - accuracy: 0.6581 - val_loss: 1.3767 - val_accuracy: 0.5446
Epoch 40/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9590 - accuracy: 0.6651 - val_loss: 1.4200 - val_accuracy: 0.5314
Epoch 41/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9548 - accuracy: 0.6668 - val_loss: 1.3692 - val_accuracy: 0.5450
Epoch 42/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9480 - accuracy: 0.6667 - val_loss: 1.3841 - val_accuracy: 0.5310
Epoch 43/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9411 - accuracy: 0.6716 - val_loss: 1.4036 - val_accuracy: 0.5382
Epoch 44/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9383 - accuracy: 0.6708 - val_loss: 1.4114 - val_accuracy: 0.5236
Epoch 45/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9258 - accuracy: 0.6769 - val_loss: 1.4224 - val_accuracy: 0.5324
Epoch 46/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9072 - accuracy: 0.6836 - val_loss: 1.3875 - val_accuracy: 0.5442
Epoch 47/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8996 - accuracy: 0.6850 - val_loss: 1.4449 - val_accuracy: 0.5280
Epoch 48/100
1407/1407 [==============================] - 13s 9ms/step - loss: 0.9050 - accuracy: 0.6835 - val_loss: 1.4167 - val_accuracy: 0.5338
Epoch 49/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8934 - accuracy: 0.6880 - val_loss: 1.4260 - val_accuracy: 0.5294
157/157 [==============================] - 1s 2ms/step - loss: 1.3344 - accuracy: 0.5398
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 27 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 5 epochs and continued to make progress until the 16th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 54.0% accuracy instead of 47.6%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 12s instead of 8s, because of the extra computations required by the BN layers. But overall the training time (wall time) was shortened significantly! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustements to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4633 - accuracy: 0.4792
###Markdown
We get 47.9% accuracy, which is not much better than the original model (47.6%), and not as good as the model using batch normalization (54.0%). However, convergence was almost as fast as with the BN model, plus each epoch took only 7 seconds. So it's by far the fastest model to train so far. e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 9s 5ms/step - loss: 2.0583 - accuracy: 0.2742 - val_loss: 1.7429 - val_accuracy: 0.3858
Epoch 2/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.6852 - accuracy: 0.4008 - val_loss: 1.7055 - val_accuracy: 0.3792
Epoch 3/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5963 - accuracy: 0.4413 - val_loss: 1.7401 - val_accuracy: 0.4072
Epoch 4/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5231 - accuracy: 0.4634 - val_loss: 1.5728 - val_accuracy: 0.4584
Epoch 5/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.4619 - accuracy: 0.4887 - val_loss: 1.5448 - val_accuracy: 0.4702
Epoch 6/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.4074 - accuracy: 0.5061 - val_loss: 1.5678 - val_accuracy: 0.4664
Epoch 7/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.3718 - accuracy: 0.5222 - val_loss: 1.5764 - val_accuracy: 0.4824
Epoch 8/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.3220 - accuracy: 0.5387 - val_loss: 1.4805 - val_accuracy: 0.4890
Epoch 9/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2908 - accuracy: 0.5487 - val_loss: 1.5521 - val_accuracy: 0.4638
Epoch 10/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.2537 - accuracy: 0.5607 - val_loss: 1.5281 - val_accuracy: 0.4924
Epoch 11/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2215 - accuracy: 0.5782 - val_loss: 1.5147 - val_accuracy: 0.5046
Epoch 12/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.1910 - accuracy: 0.5831 - val_loss: 1.5248 - val_accuracy: 0.5002
Epoch 13/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1659 - accuracy: 0.5982 - val_loss: 1.5620 - val_accuracy: 0.5066
Epoch 14/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1282 - accuracy: 0.6120 - val_loss: 1.5440 - val_accuracy: 0.5180
Epoch 15/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1127 - accuracy: 0.6133 - val_loss: 1.5782 - val_accuracy: 0.5146
Epoch 16/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0917 - accuracy: 0.6266 - val_loss: 1.6182 - val_accuracy: 0.5182
Epoch 17/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.0620 - accuracy: 0.6331 - val_loss: 1.6285 - val_accuracy: 0.5126
Epoch 18/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0433 - accuracy: 0.6413 - val_loss: 1.6299 - val_accuracy: 0.5158
Epoch 19/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0087 - accuracy: 0.6549 - val_loss: 1.7172 - val_accuracy: 0.5062
Epoch 20/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9950 - accuracy: 0.6571 - val_loss: 1.6524 - val_accuracy: 0.5098
Epoch 21/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9848 - accuracy: 0.6652 - val_loss: 1.7686 - val_accuracy: 0.5038
Epoch 22/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9597 - accuracy: 0.6744 - val_loss: 1.6177 - val_accuracy: 0.5084
Epoch 23/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9399 - accuracy: 0.6790 - val_loss: 1.7095 - val_accuracy: 0.5082
Epoch 24/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9148 - accuracy: 0.6884 - val_loss: 1.7160 - val_accuracy: 0.5150
Epoch 25/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9023 - accuracy: 0.6949 - val_loss: 1.7017 - val_accuracy: 0.5152
Epoch 26/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8732 - accuracy: 0.7031 - val_loss: 1.7274 - val_accuracy: 0.5088
Epoch 27/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.8542 - accuracy: 0.7091 - val_loss: 1.7648 - val_accuracy: 0.5166
Epoch 28/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8499 - accuracy: 0.7118 - val_loss: 1.7973 - val_accuracy: 0.5000
157/157 [==============================] - 0s 1ms/step - loss: 1.4805 - accuracy: 0.4890
###Markdown
The model reaches 48.9% accuracy on the validation set. That's very slightly better than without dropout (47.6%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple of utility functions. The first will run the model many times (10 by default) and return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get no accuracy improvement in this case (we're still at 48.9% accuracy). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(learning_rate=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(math.ceil(len(X_train_scaled) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/15
352/352 [==============================] - 3s 6ms/step - loss: 2.2298 - accuracy: 0.2349 - val_loss: 1.7841 - val_accuracy: 0.3834
Epoch 2/15
352/352 [==============================] - 2s 6ms/step - loss: 1.7928 - accuracy: 0.3689 - val_loss: 1.6806 - val_accuracy: 0.4086
Epoch 3/15
352/352 [==============================] - 2s 6ms/step - loss: 1.6475 - accuracy: 0.4190 - val_loss: 1.6378 - val_accuracy: 0.4350
Epoch 4/15
352/352 [==============================] - 2s 6ms/step - loss: 1.5428 - accuracy: 0.4543 - val_loss: 1.6266 - val_accuracy: 0.4390
Epoch 5/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4865 - accuracy: 0.4769 - val_loss: 1.6158 - val_accuracy: 0.4384
Epoch 6/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4339 - accuracy: 0.4866 - val_loss: 1.5850 - val_accuracy: 0.4412
Epoch 7/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4042 - accuracy: 0.5056 - val_loss: 1.6146 - val_accuracy: 0.4384
Epoch 8/15
352/352 [==============================] - 2s 6ms/step - loss: 1.3437 - accuracy: 0.5229 - val_loss: 1.5299 - val_accuracy: 0.4846
Epoch 9/15
352/352 [==============================] - 2s 5ms/step - loss: 1.2721 - accuracy: 0.5459 - val_loss: 1.5145 - val_accuracy: 0.4874
Epoch 10/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1942 - accuracy: 0.5698 - val_loss: 1.4958 - val_accuracy: 0.5040
Epoch 11/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1211 - accuracy: 0.6033 - val_loss: 1.5406 - val_accuracy: 0.4984
Epoch 12/15
352/352 [==============================] - 2s 6ms/step - loss: 1.0673 - accuracy: 0.6161 - val_loss: 1.5284 - val_accuracy: 0.5144
Epoch 13/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9927 - accuracy: 0.6435 - val_loss: 1.5449 - val_accuracy: 0.5140
Epoch 14/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9205 - accuracy: 0.6703 - val_loss: 1.5652 - val_accuracy: 0.5224
Epoch 15/15
352/352 [==============================] - 2s 6ms/step - loss: 0.8936 - accuracy: 0.6801 - val_loss: 1.5912 - val_accuracy: 0.5198
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure Matplotlib plots figures inline, and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated, so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 50us/sample - loss: 1.2806 - accuracy: 0.6250 - val_loss: 0.8883 - val_accuracy: 0.7152
Epoch 2/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.7954 - accuracy: 0.7373 - val_loss: 0.7135 - val_accuracy: 0.7648
Epoch 3/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.6816 - accuracy: 0.7727 - val_loss: 0.6356 - val_accuracy: 0.7882
Epoch 4/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.6215 - accuracy: 0.7935 - val_loss: 0.5922 - val_accuracy: 0.8012
Epoch 5/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5830 - accuracy: 0.8081 - val_loss: 0.5596 - val_accuracy: 0.8172
Epoch 6/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5553 - accuracy: 0.8155 - val_loss: 0.5338 - val_accuracy: 0.8240
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5340 - accuracy: 0.8221 - val_loss: 0.5157 - val_accuracy: 0.8310
Epoch 8/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5172 - accuracy: 0.8265 - val_loss: 0.5035 - val_accuracy: 0.8336
Epoch 9/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5036 - accuracy: 0.8299 - val_loss: 0.4950 - val_accuracy: 0.8354
Epoch 10/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.4922 - accuracy: 0.8324 - val_loss: 0.4797 - val_accuracy: 0.8430
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 61us/sample - loss: 1.3460 - accuracy: 0.6233 - val_loss: 0.9251 - val_accuracy: 0.7208
Epoch 2/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.8208 - accuracy: 0.7359 - val_loss: 0.7318 - val_accuracy: 0.7626
Epoch 3/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.6974 - accuracy: 0.7695 - val_loss: 0.6500 - val_accuracy: 0.7886
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.6338 - accuracy: 0.7904 - val_loss: 0.6000 - val_accuracy: 0.8070
Epoch 5/10
55000/55000 [==============================] - 3s 57us/sample - loss: 0.5920 - accuracy: 0.8045 - val_loss: 0.5662 - val_accuracy: 0.8172
Epoch 6/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5620 - accuracy: 0.8138 - val_loss: 0.5416 - val_accuracy: 0.8230
Epoch 7/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5393 - accuracy: 0.8203 - val_loss: 0.5218 - val_accuracy: 0.8302
Epoch 8/10
55000/55000 [==============================] - 3s 57us/sample - loss: 0.5216 - accuracy: 0.8248 - val_loss: 0.5051 - val_accuracy: 0.8340
Epoch 9/10
55000/55000 [==============================] - 3s 59us/sample - loss: 0.5069 - accuracy: 0.8289 - val_loss: 0.4923 - val_accuracy: 0.8384
Epoch 10/10
55000/55000 [==============================] - 3s 62us/sample - loss: 0.4948 - accuracy: 0.8322 - val_loss: 0.4847 - val_accuracy: 0.8372
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 35s 644us/sample - loss: 1.0197 - accuracy: 0.6154 - val_loss: 0.7386 - val_accuracy: 0.7348
Epoch 2/5
55000/55000 [==============================] - 33s 607us/sample - loss: 0.7149 - accuracy: 0.7401 - val_loss: 0.6187 - val_accuracy: 0.7774
Epoch 3/5
55000/55000 [==============================] - 32s 583us/sample - loss: 0.6193 - accuracy: 0.7803 - val_loss: 0.5926 - val_accuracy: 0.8036
Epoch 4/5
55000/55000 [==============================] - 32s 586us/sample - loss: 0.5555 - accuracy: 0.8043 - val_loss: 0.5208 - val_accuracy: 0.8262
Epoch 5/5
55000/55000 [==============================] - 32s 573us/sample - loss: 0.5159 - accuracy: 0.8238 - val_loss: 0.4790 - val_accuracy: 0.8358
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 18s 319us/sample - loss: 1.9174 - accuracy: 0.2242 - val_loss: 1.3856 - val_accuracy: 0.3846
Epoch 2/5
55000/55000 [==============================] - 15s 279us/sample - loss: 1.2147 - accuracy: 0.4750 - val_loss: 1.0691 - val_accuracy: 0.5510
Epoch 3/5
55000/55000 [==============================] - 15s 281us/sample - loss: 0.9576 - accuracy: 0.6025 - val_loss: 0.7688 - val_accuracy: 0.7036
Epoch 4/5
55000/55000 [==============================] - 15s 281us/sample - loss: 0.8116 - accuracy: 0.6762 - val_loss: 0.7276 - val_accuracy: 0.7288
Epoch 5/5
55000/55000 [==============================] - 15s 278us/sample - loss: 0.8167 - accuracy: 0.6862 - val_loss: 0.7697 - val_accuracy: 0.7032
###Markdown
Not great at all: we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 5s 85us/sample - loss: 0.8756 - accuracy: 0.7140 - val_loss: 0.5514 - val_accuracy: 0.8212
Epoch 2/10
55000/55000 [==============================] - 4s 74us/sample - loss: 0.5765 - accuracy: 0.8033 - val_loss: 0.4742 - val_accuracy: 0.8436
Epoch 3/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.5146 - accuracy: 0.8216 - val_loss: 0.4382 - val_accuracy: 0.8530
Epoch 4/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4821 - accuracy: 0.8322 - val_loss: 0.4170 - val_accuracy: 0.8604
Epoch 5/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4589 - accuracy: 0.8402 - val_loss: 0.4003 - val_accuracy: 0.8658
Epoch 6/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4428 - accuracy: 0.8459 - val_loss: 0.3883 - val_accuracy: 0.8698
Epoch 7/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4220 - accuracy: 0.8521 - val_loss: 0.3792 - val_accuracy: 0.8720
Epoch 8/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4150 - accuracy: 0.8546 - val_loss: 0.3696 - val_accuracy: 0.8754
Epoch 9/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4013 - accuracy: 0.8589 - val_loss: 0.3629 - val_accuracy: 0.8746
Epoch 10/10
55000/55000 [==============================] - 4s 74us/sample - loss: 0.3931 - accuracy: 0.8615 - val_loss: 0.3581 - val_accuracy: 0.8766
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has some as well; they would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
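As a rough back-of-the-envelope check of that "waste of parameters" point (toy arithmetic of my own, assuming the 784-input, 300-unit first hidden layer of the model below):

```
# Hypothetical counts for a Dense(300) layer fed 28*28 = 784 flattened inputs:
params_with_bias    = 784 * 300 + 300   # 235,500 (weights + redundant biases)
params_without_bias = 784 * 300         # 235,200
bn_params           = 4 * 300           # gamma, beta, moving mean, moving variance
```

BN's `beta` parameter already provides a learnable offset per unit, so the Dense biases would only duplicate it.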
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
    keras.layers.Dense(100, use_bias=False),
    keras.layers.BatchNormalization(),
    keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 5s 89us/sample - loss: 0.8617 - accuracy: 0.7095 - val_loss: 0.5649 - val_accuracy: 0.8102
Epoch 2/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.5803 - accuracy: 0.8015 - val_loss: 0.4833 - val_accuracy: 0.8344
Epoch 3/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.5153 - accuracy: 0.8208 - val_loss: 0.4463 - val_accuracy: 0.8462
Epoch 4/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.4846 - accuracy: 0.8307 - val_loss: 0.4256 - val_accuracy: 0.8530
Epoch 5/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.4576 - accuracy: 0.8402 - val_loss: 0.4106 - val_accuracy: 0.8590
Epoch 6/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4401 - accuracy: 0.8467 - val_loss: 0.3973 - val_accuracy: 0.8610
Epoch 7/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4296 - accuracy: 0.8482 - val_loss: 0.3899 - val_accuracy: 0.8650
Epoch 8/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.4127 - accuracy: 0.8559 - val_loss: 0.3818 - val_accuracy: 0.8658
Epoch 9/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4007 - accuracy: 0.8588 - val_loss: 0.3741 - val_accuracy: 0.8682
Epoch 10/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.3929 - accuracy: 0.8621 - val_loss: 0.3694 - val_accuracy: 0.8734
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
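A toy illustration of my own (not the notebook's code) of how the two options differ: `clipvalue` clips each gradient component independently, which can change the gradient's direction, whereas `clipnorm` rescales the whole gradient vector when its norm exceeds the threshold, preserving its direction:

```
import numpy as np

g = np.array([0.9, 100.0])                  # a toy gradient vector
by_value = np.clip(g, -1.0, 1.0)            # -> [0.9, 1.0]: direction changed
norm = np.linalg.norm(g)
by_norm = g / norm if norm > 1.0 else g     # -> [0.009, 0.99996]: direction preserved
```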
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Train on 200 samples, validate on 986 samples
Epoch 1/4
200/200 [==============================] - 0s 2ms/sample - loss: 0.5851 - accuracy: 0.6600 - val_loss: 0.5855 - val_accuracy: 0.6318
Epoch 2/4
200/200 [==============================] - 0s 303us/sample - loss: 0.5484 - accuracy: 0.6850 - val_loss: 0.5484 - val_accuracy: 0.6775
Epoch 3/4
200/200 [==============================] - 0s 294us/sample - loss: 0.5116 - accuracy: 0.7250 - val_loss: 0.5141 - val_accuracy: 0.7160
Epoch 4/4
200/200 [==============================] - 0s 316us/sample - loss: 0.4779 - accuracy: 0.7450 - val_loss: 0.4859 - val_accuracy: 0.7363
Train on 200 samples, validate on 986 samples
Epoch 1/16
200/200 [==============================] - 0s 2ms/sample - loss: 0.3989 - accuracy: 0.8050 - val_loss: 0.3419 - val_accuracy: 0.8702
Epoch 2/16
200/200 [==============================] - 0s 328us/sample - loss: 0.2795 - accuracy: 0.9300 - val_loss: 0.2624 - val_accuracy: 0.9280
Epoch 3/16
200/200 [==============================] - 0s 319us/sample - loss: 0.2128 - accuracy: 0.9650 - val_loss: 0.2150 - val_accuracy: 0.9544
Epoch 4/16
200/200 [==============================] - 0s 318us/sample - loss: 0.1720 - accuracy: 0.9800 - val_loss: 0.1826 - val_accuracy: 0.9635
Epoch 5/16
200/200 [==============================] - 0s 317us/sample - loss: 0.1436 - accuracy: 0.9800 - val_loss: 0.1586 - val_accuracy: 0.9736
Epoch 6/16
200/200 [==============================] - 0s 317us/sample - loss: 0.1231 - accuracy: 0.9850 - val_loss: 0.1407 - val_accuracy: 0.9807
Epoch 7/16
200/200 [==============================] - 0s 325us/sample - loss: 0.1074 - accuracy: 0.9900 - val_loss: 0.1270 - val_accuracy: 0.9828
Epoch 8/16
200/200 [==============================] - 0s 326us/sample - loss: 0.0953 - accuracy: 0.9950 - val_loss: 0.1158 - val_accuracy: 0.9848
Epoch 9/16
200/200 [==============================] - 0s 319us/sample - loss: 0.0854 - accuracy: 1.0000 - val_loss: 0.1076 - val_accuracy: 0.9878
Epoch 10/16
200/200 [==============================] - 0s 322us/sample - loss: 0.0781 - accuracy: 1.0000 - val_loss: 0.1007 - val_accuracy: 0.9888
Epoch 11/16
200/200 [==============================] - 0s 316us/sample - loss: 0.0718 - accuracy: 1.0000 - val_loss: 0.0944 - val_accuracy: 0.9888
Epoch 12/16
200/200 [==============================] - 0s 319us/sample - loss: 0.0662 - accuracy: 1.0000 - val_loss: 0.0891 - val_accuracy: 0.9899
Epoch 13/16
200/200 [==============================] - 0s 318us/sample - loss: 0.0613 - accuracy: 1.0000 - val_loss: 0.0846 - val_accuracy: 0.9899
Epoch 14/16
200/200 [==============================] - 0s 332us/sample - loss: 0.0574 - accuracy: 1.0000 - val_loss: 0.0806 - val_accuracy: 0.9899
Epoch 15/16
200/200 [==============================] - 0s 320us/sample - loss: 0.0538 - accuracy: 1.0000 - val_loss: 0.0770 - val_accuracy: 0.9899
Epoch 16/16
200/200 [==============================] - 0s 320us/sample - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.0740 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
2000/2000 [==============================] - 0s 38us/sample - loss: 0.0689 - accuracy: 0.9925
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of almost 4!
###Code
(100 - 97.05) / (100 - 99.25)  # error rate of model_B (2.95%) over model_B_on_A (0.75%) ≈ 3.9
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
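To make the formula concrete (worked numbers of my own, using the `lr=0.01, decay=1e-4` values from the cell below): with `c=1` and `s = 1/decay = 10000` steps, the learning rate is halved after 10,000 steps, divided by 3 after 20,000 steps, and so on:

```
lr0, decay = 0.01, 1e-4
s = 1 / decay                              # 10,000 steps
for steps in (0, 10000, 20000, 30000):
    print(steps, lr0 / (1 + steps / s))    # 0.01, 0.005, 0.00333..., 0.0025
```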
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
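In other words (a quick check of my own, using the `lr0=0.01, s=20` values from the function below), the learning rate is divided by 10 every `s` epochs:

```
lr0, s = 0.01, 20
for epoch in (0, 10, 20, 40):
    print(epoch, lr0 * 0.1**(epoch / s))   # 0.01, ~0.00316, 0.001, 0.0001
```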
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
exponential_decay_fn
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
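The per-batch factor used below, `0.1**(1 / s)`, is chosen so that it compounds to exactly a 10× reduction after `s` batches (a quick numeric sanity check of my own):

```
s = 40000
factor = 0.1**(1 / s)
print(factor**s)   # ≈ 0.1: applying the per-batch factor s times divides the lr by 10
```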
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4887 - accuracy: 0.8282 - val_loss: 0.4245 - val_accuracy: 0.8526
Epoch 2/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.3830 - accuracy: 0.8641 - val_loss: 0.3798 - val_accuracy: 0.8688
Epoch 3/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.3491 - accuracy: 0.8758 - val_loss: 0.3650 - val_accuracy: 0.8730
Epoch 4/25
55000/55000 [==============================] - 4s 78us/sample - loss: 0.3267 - accuracy: 0.8839 - val_loss: 0.3564 - val_accuracy: 0.8746
Epoch 5/25
55000/55000 [==============================] - 4s 72us/sample - loss: 0.3102 - accuracy: 0.8893 - val_loss: 0.3493 - val_accuracy: 0.8770
Epoch 6/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2969 - accuracy: 0.8939 - val_loss: 0.3400 - val_accuracy: 0.8818
Epoch 7/25
55000/55000 [==============================] - 4s 77us/sample - loss: 0.2855 - accuracy: 0.8983 - val_loss: 0.3385 - val_accuracy: 0.8830
Epoch 8/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2764 - accuracy: 0.9025 - val_loss: 0.3372 - val_accuracy: 0.8824
Epoch 9/25
55000/55000 [==============================] - 4s 67us/sample - loss: 0.2684 - accuracy: 0.9039 - val_loss: 0.3337 - val_accuracy: 0.8848
Epoch 10/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2613 - accuracy: 0.9072 - val_loss: 0.3277 - val_accuracy: 0.8862
Epoch 11/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2555 - accuracy: 0.9086 - val_loss: 0.3273 - val_accuracy: 0.8860
Epoch 12/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2500 - accuracy: 0.9111 - val_loss: 0.3244 - val_accuracy: 0.8840
Epoch 13/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2454 - accuracy: 0.9124 - val_loss: 0.3194 - val_accuracy: 0.8904
Epoch 14/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2414 - accuracy: 0.9141 - val_loss: 0.3226 - val_accuracy: 0.8884
Epoch 15/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2378 - accuracy: 0.9160 - val_loss: 0.3233 - val_accuracy: 0.8860
Epoch 16/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2347 - accuracy: 0.9174 - val_loss: 0.3207 - val_accuracy: 0.8904
Epoch 17/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2318 - accuracy: 0.9179 - val_loss: 0.3195 - val_accuracy: 0.8892
Epoch 18/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2293 - accuracy: 0.9193 - val_loss: 0.3184 - val_accuracy: 0.8916
Epoch 19/25
55000/55000 [==============================] - 4s 67us/sample - loss: 0.2272 - accuracy: 0.9201 - val_loss: 0.3196 - val_accuracy: 0.8886
Epoch 20/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2253 - accuracy: 0.9206 - val_loss: 0.3190 - val_accuracy: 0.8918
Epoch 21/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2235 - accuracy: 0.9214 - val_loss: 0.3176 - val_accuracy: 0.8912
Epoch 22/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2220 - accuracy: 0.9220 - val_loss: 0.3181 - val_accuracy: 0.8900
Epoch 23/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2206 - accuracy: 0.9226 - val_loss: 0.3187 - val_accuracy: 0.8894
Epoch 24/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2193 - accuracy: 0.9231 - val_loss: 0.3168 - val_accuracy: 0.8908
Epoch 25/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2181 - accuracy: 0.9234 - val_loss: 0.3171 - val_accuracy: 0.8898
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
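###Markdown
The schedule object is then passed to the optimizer just like the `ExponentialDecay` schedule above. A minimal sketch, reusing the compile settings from the previous cells:
###Code
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
              metrics=["accuracy"])
###Output
_____no_output_____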
###Markdown
1Cycle scheduling
###Code
K = keras.backend
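# This callback multiplies the learning rate by a constant factor after each
# batch and records the (learning rate, loss) pairs, so that loss can later be
# plotted against learning rate to pick a good maximum rate.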
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
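    # Three phases: ramp the learning rate linearly from start_rate up to
    # max_rate over the first half_iteration steps, ramp it back down to
    # start_rate over the next half_iteration steps, then anneal it linearly
    # down to last_rate over the remaining iterations.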
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.6569 - accuracy: 0.7750 - val_loss: 0.4875 - val_accuracy: 0.8300
Epoch 2/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.4584 - accuracy: 0.8391 - val_loss: 0.4390 - val_accuracy: 0.8476
Epoch 3/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.4124 - accuracy: 0.8541 - val_loss: 0.4102 - val_accuracy: 0.8570
Epoch 4/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.3842 - accuracy: 0.8643 - val_loss: 0.3893 - val_accuracy: 0.8652
Epoch 5/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3641 - accuracy: 0.8707 - val_loss: 0.3736 - val_accuracy: 0.8678
Epoch 6/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.3456 - accuracy: 0.8781 - val_loss: 0.3652 - val_accuracy: 0.8726
Epoch 7/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.3318 - accuracy: 0.8818 - val_loss: 0.3596 - val_accuracy: 0.8768
Epoch 8/25
55000/55000 [==============================] - 1s 24us/sample - loss: 0.3180 - accuracy: 0.8862 - val_loss: 0.3845 - val_accuracy: 0.8602
Epoch 9/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.3062 - accuracy: 0.8893 - val_loss: 0.3824 - val_accuracy: 0.8660
Epoch 10/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.2938 - accuracy: 0.8934 - val_loss: 0.3516 - val_accuracy: 0.8742
Epoch 11/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.2838 - accuracy: 0.8975 - val_loss: 0.3609 - val_accuracy: 0.8740
Epoch 12/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.2716 - accuracy: 0.9025 - val_loss: 0.3843 - val_accuracy: 0.8666
Epoch 13/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.2541 - accuracy: 0.9091 - val_loss: 0.3282 - val_accuracy: 0.8844
Epoch 14/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.2390 - accuracy: 0.9139 - val_loss: 0.3336 - val_accuracy: 0.8838
Epoch 15/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.2273 - accuracy: 0.9177 - val_loss: 0.3283 - val_accuracy: 0.8884
Epoch 16/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.2156 - accuracy: 0.9234 - val_loss: 0.3288 - val_accuracy: 0.8862
Epoch 17/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.2062 - accuracy: 0.9265 - val_loss: 0.3215 - val_accuracy: 0.8896
Epoch 18/25
55000/55000 [==============================] - 1s 24us/sample - loss: 0.1973 - accuracy: 0.9299 - val_loss: 0.3284 - val_accuracy: 0.8912
Epoch 19/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.1892 - accuracy: 0.9344 - val_loss: 0.3229 - val_accuracy: 0.8904
Epoch 20/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.1822 - accuracy: 0.9366 - val_loss: 0.3196 - val_accuracy: 0.8902
Epoch 21/25
55000/55000 [==============================] - 1s 24us/sample - loss: 0.1758 - accuracy: 0.9388 - val_loss: 0.3184 - val_accuracy: 0.8940
Epoch 22/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.1699 - accuracy: 0.9422 - val_loss: 0.3221 - val_accuracy: 0.8912
Epoch 23/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.1657 - accuracy: 0.9444 - val_loss: 0.3173 - val_accuracy: 0.8944
Epoch 24/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.1630 - accuracy: 0.9457 - val_loss: 0.3162 - val_accuracy: 0.8946
Epoch 25/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.1610 - accuracy: 0.9464 - val_loss: 0.3169 - val_accuracy: 0.8942
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 7s 129us/sample - loss: 1.6597 - accuracy: 0.8128 - val_loss: 0.7630 - val_accuracy: 0.8080
Epoch 2/2
55000/55000 [==============================] - 7s 124us/sample - loss: 0.7176 - accuracy: 0.8271 - val_loss: 0.6848 - val_accuracy: 0.8360
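###Markdown
For reference, the `l1_l2` regularizer mentioned in the comment above is used in the same way. A minimal sketch (the factors 0.1 and 0.01 are arbitrary):
###Code
layer = keras.layers.Dense(100, activation="elu",
                           kernel_initializer="he_normal",
                           kernel_regularizer=keras.regularizers.l1_l2(0.1, 0.01))
###Output
_____no_output_____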
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 145us/sample - loss: 0.5741 - accuracy: 0.8030 - val_loss: 0.3841 - val_accuracy: 0.8572
Epoch 2/2
55000/55000 [==============================] - 7s 134us/sample - loss: 0.4218 - accuracy: 0.8469 - val_loss: 0.3534 - val_accuracy: 0.8728
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
history = model.fit(X_train_scaled, y_train)
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
_____no_output_____
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
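# Subclassing Dropout/AlphaDropout so that call() always runs with
# training=True keeps dropout active at inference time, which is what
# MC Dropout requires.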
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
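###Markdown
The Monte-Carlo averaging above can be wrapped in a small helper. This is just a convenience sketch around the same pattern (the name `mc_dropout_predict` is ours, not from the book):
###Code
def mc_dropout_predict(mc_model, X, n_samples=100):
    # Run several stochastic forward passes (dropout stays active) and return
    # the mean class probabilities along with their standard deviation
    Y_probas = np.stack([mc_model.predict(X) for _ in range(n_samples)])
    return Y_probas.mean(axis=0), Y_probas.std(axis=0)
y_proba_mc, y_std_mc = mc_dropout_predict(mc_model, X_test_scaled[:1])
np.round(y_proba_mc, 2)
###Output
_____no_output_____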
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 147us/sample - loss: 0.4745 - accuracy: 0.8329 - val_loss: 0.3988 - val_accuracy: 0.8584
Epoch 2/2
55000/55000 [==============================] - 7s 135us/sample - loss: 0.3554 - accuracy: 0.8688 - val_loss: 0.3681 - val_accuracy: 0.8726
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure Matplotlib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 2s 41us/sample - loss: 1.2810 - accuracy: 0.6205 - val_loss: 0.8869 - val_accuracy: 0.7160
Epoch 2/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.7952 - accuracy: 0.7369 - val_loss: 0.7132 - val_accuracy: 0.7626
Epoch 3/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6817 - accuracy: 0.7726 - val_loss: 0.6385 - val_accuracy: 0.7894
Epoch 4/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6219 - accuracy: 0.7942 - val_loss: 0.5931 - val_accuracy: 0.8016
Epoch 5/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5830 - accuracy: 0.8074 - val_loss: 0.5607 - val_accuracy: 0.8170
Epoch 6/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5552 - accuracy: 0.8172 - val_loss: 0.5355 - val_accuracy: 0.8238
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5339 - accuracy: 0.8226 - val_loss: 0.5166 - val_accuracy: 0.8298
Epoch 8/10
55000/55000 [==============================] - 2s 43us/sample - loss: 0.5173 - accuracy: 0.8262 - val_loss: 0.5043 - val_accuracy: 0.8356
Epoch 9/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5039 - accuracy: 0.8306 - val_loss: 0.4889 - val_accuracy: 0.8384
Epoch 10/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.4923 - accuracy: 0.8333 - val_loss: 0.4816 - val_accuracy: 0.8394
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 47us/sample - loss: 1.3452 - accuracy: 0.6203 - val_loss: 0.9241 - val_accuracy: 0.7170
Epoch 2/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.8196 - accuracy: 0.7364 - val_loss: 0.7314 - val_accuracy: 0.7600
Epoch 3/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.6970 - accuracy: 0.7701 - val_loss: 0.6517 - val_accuracy: 0.7880
Epoch 4/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.6333 - accuracy: 0.7914 - val_loss: 0.6032 - val_accuracy: 0.8050
Epoch 5/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5916 - accuracy: 0.8049 - val_loss: 0.5689 - val_accuracy: 0.8162
Epoch 6/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5619 - accuracy: 0.8143 - val_loss: 0.5416 - val_accuracy: 0.8222
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5391 - accuracy: 0.8208 - val_loss: 0.5213 - val_accuracy: 0.8300
Epoch 8/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5214 - accuracy: 0.8258 - val_loss: 0.5075 - val_accuracy: 0.8348
Epoch 9/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5070 - accuracy: 0.8287 - val_loss: 0.4917 - val_accuracy: 0.8380
Epoch 10/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.4946 - accuracy: 0.8322 - val_loss: 0.4839 - val_accuracy: 0.8378
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 13s 238us/sample - loss: 1.1277 - accuracy: 0.5573 - val_loss: 0.8152 - val_accuracy: 0.6700
Epoch 2/5
55000/55000 [==============================] - 11s 198us/sample - loss: 0.6935 - accuracy: 0.7383 - val_loss: 0.5806 - val_accuracy: 0.7928
Epoch 3/5
55000/55000 [==============================] - 11s 196us/sample - loss: 0.5871 - accuracy: 0.7865 - val_loss: 0.6876 - val_accuracy: 0.7462
Epoch 4/5
55000/55000 [==============================] - 11s 199us/sample - loss: 0.5281 - accuracy: 0.8134 - val_loss: 0.5236 - val_accuracy: 0.8230
Epoch 5/5
55000/55000 [==============================] - 11s 201us/sample - loss: 0.4824 - accuracy: 0.8327 - val_loss: 0.5201 - val_accuracy: 0.8312
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 12s 213us/sample - loss: 1.7518 - accuracy: 0.2797 - val_loss: 1.2328 - val_accuracy: 0.4720
Epoch 2/5
55000/55000 [==============================] - 10s 177us/sample - loss: 1.1922 - accuracy: 0.4982 - val_loss: 1.0247 - val_accuracy: 0.5354
Epoch 3/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.9390 - accuracy: 0.6180 - val_loss: 1.0809 - val_accuracy: 0.5118
Epoch 4/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.7787 - accuracy: 0.6937 - val_loss: 0.7067 - val_accuracy: 0.7344
Epoch 5/5
55000/55000 [==============================] - 10s 180us/sample - loss: 0.7465 - accuracy: 0.7122 - val_loss: 0.9720 - val_accuracy: 0.5702
###Markdown
Not great at all; we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 63us/sample - loss: 0.8760 - accuracy: 0.7122 - val_loss: 0.5509 - val_accuracy: 0.8224
Epoch 2/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5737 - accuracy: 0.8039 - val_loss: 0.4723 - val_accuracy: 0.8460
Epoch 3/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5143 - accuracy: 0.8231 - val_loss: 0.4376 - val_accuracy: 0.8570
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4826 - accuracy: 0.8333 - val_loss: 0.4135 - val_accuracy: 0.8638
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4571 - accuracy: 0.8415 - val_loss: 0.3990 - val_accuracy: 0.8654
Epoch 6/10
55000/55000 [==============================] - 3s 53us/sample - loss: 0.4432 - accuracy: 0.8456 - val_loss: 0.3870 - val_accuracy: 0.8710
Epoch 7/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.4255 - accuracy: 0.8515 - val_loss: 0.3782 - val_accuracy: 0.8698
Epoch 8/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4150 - accuracy: 0.8536 - val_loss: 0.3708 - val_accuracy: 0.8758
Epoch 9/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4016 - accuracy: 0.8596 - val_loss: 0.3634 - val_accuracy: 0.8750
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3915 - accuracy: 0.8629 - val_loss: 0.3601 - val_accuracy: 0.8758
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need bias terms, since the `BatchNormalization` layer adds an offset parameter of its own; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 4s 64us/sample - loss: 0.8656 - accuracy: 0.7094 - val_loss: 0.5650 - val_accuracy: 0.8098
Epoch 2/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5766 - accuracy: 0.8018 - val_loss: 0.4834 - val_accuracy: 0.8358
Epoch 3/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5184 - accuracy: 0.8216 - val_loss: 0.4461 - val_accuracy: 0.8470
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4852 - accuracy: 0.8314 - val_loss: 0.4226 - val_accuracy: 0.8558
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4579 - accuracy: 0.8399 - val_loss: 0.4086 - val_accuracy: 0.8604
Epoch 6/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4406 - accuracy: 0.8457 - val_loss: 0.3974 - val_accuracy: 0.8640
Epoch 7/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4263 - accuracy: 0.8498 - val_loss: 0.3883 - val_accuracy: 0.8676
Epoch 8/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4152 - accuracy: 0.8530 - val_loss: 0.3803 - val_accuracy: 0.8682
Epoch 9/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4032 - accuracy: 0.8564 - val_loss: 0.3738 - val_accuracy: 0.8718
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3937 - accuracy: 0.8623 - val_loss: 0.3690 - val_accuracy: 0.8732
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
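###Markdown
The clipped optimizer is then used like any other when compiling a model. A minimal sketch, assuming a `model` such as the one defined above:
###Code
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
              metrics=["accuracy"])
###Output
_____no_output_____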
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Train on 200 samples, validate on 986 samples
Epoch 1/4
200/200 [==============================] - 0s 2ms/sample - loss: 0.5619 - accuracy: 0.6650 - val_loss: 0.5669 - val_accuracy: 0.6531
Epoch 2/4
200/200 [==============================] - 0s 208us/sample - loss: 0.5249 - accuracy: 0.7200 - val_loss: 0.5337 - val_accuracy: 0.6957
Epoch 3/4
200/200 [==============================] - 0s 200us/sample - loss: 0.4923 - accuracy: 0.7400 - val_loss: 0.5039 - val_accuracy: 0.7211
Epoch 4/4
200/200 [==============================] - 0s 214us/sample - loss: 0.4630 - accuracy: 0.7550 - val_loss: 0.4773 - val_accuracy: 0.7383
Train on 200 samples, validate on 986 samples
Epoch 1/16
200/200 [==============================] - 0s 2ms/sample - loss: 0.3864 - accuracy: 0.8200 - val_loss: 0.3357 - val_accuracy: 0.8661
Epoch 2/16
200/200 [==============================] - 0s 207us/sample - loss: 0.2701 - accuracy: 0.9350 - val_loss: 0.2608 - val_accuracy: 0.9249
Epoch 3/16
200/200 [==============================] - 0s 226us/sample - loss: 0.2082 - accuracy: 0.9650 - val_loss: 0.2150 - val_accuracy: 0.9503
Epoch 4/16
200/200 [==============================] - 0s 212us/sample - loss: 0.1695 - accuracy: 0.9800 - val_loss: 0.1840 - val_accuracy: 0.9625
Epoch 5/16
200/200 [==============================] - 0s 226us/sample - loss: 0.1428 - accuracy: 0.9800 - val_loss: 0.1602 - val_accuracy: 0.9706
Epoch 6/16
200/200 [==============================] - 0s 236us/sample - loss: 0.1221 - accuracy: 0.9850 - val_loss: 0.1424 - val_accuracy: 0.9797
Epoch 7/16
200/200 [==============================] - 0s 218us/sample - loss: 0.1067 - accuracy: 0.9950 - val_loss: 0.1293 - val_accuracy: 0.9828
Epoch 8/16
200/200 [==============================] - 0s 229us/sample - loss: 0.0952 - accuracy: 0.9950 - val_loss: 0.1186 - val_accuracy: 0.9848
Epoch 9/16
200/200 [==============================] - 0s 224us/sample - loss: 0.0858 - accuracy: 0.9950 - val_loss: 0.1099 - val_accuracy: 0.9848
Epoch 10/16
200/200 [==============================] - 0s 241us/sample - loss: 0.0781 - accuracy: 1.0000 - val_loss: 0.1026 - val_accuracy: 0.9878
Epoch 11/16
200/200 [==============================] - 0s 234us/sample - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0964 - val_accuracy: 0.9888
Epoch 12/16
200/200 [==============================] - 0s 222us/sample - loss: 0.0664 - accuracy: 1.0000 - val_loss: 0.0906 - val_accuracy: 0.9888
Epoch 13/16
200/200 [==============================] - 0s 228us/sample - loss: 0.0614 - accuracy: 1.0000 - val_loss: 0.0862 - val_accuracy: 0.9899
Epoch 14/16
200/200 [==============================] - 0s 225us/sample - loss: 0.0575 - accuracy: 1.0000 - val_loss: 0.0818 - val_accuracy: 0.9899
Epoch 15/16
200/200 [==============================] - 0s 219us/sample - loss: 0.0537 - accuracy: 1.0000 - val_loss: 0.0782 - val_accuracy: 0.9899
Epoch 16/16
200/200 [==============================] - 0s 221us/sample - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.0752 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
2000/2000 [==============================] - 0s 25us/sample - loss: 0.0697 - accuracy: 0.9925
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4!
###Code
(100 - 96.95) / (100 - 99.25)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
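###Markdown
The `LearningRateScheduler` callback accepts this two-argument form as well. Note that it multiplies the *current* learning rate by 0.1**(1/20) at each epoch, so the starting point is whatever learning rate the optimizer was created with. A minimal sketch, reusing the training setup from the previous cells:
###Code
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
                    validation_data=(X_valid_scaled, y_valid),
                    callbacks=[lr_scheduler])
###Output
_____no_output_____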
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4872 - accuracy: 0.8296 - val_loss: 0.4141 - val_accuracy: 0.8548
Epoch 2/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3829 - accuracy: 0.8643 - val_loss: 0.3773 - val_accuracy: 0.8704
Epoch 3/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3495 - accuracy: 0.8763 - val_loss: 0.3696 - val_accuracy: 0.8730
Epoch 4/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3274 - accuracy: 0.8831 - val_loss: 0.3545 - val_accuracy: 0.8760
Epoch 5/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3102 - accuracy: 0.8899 - val_loss: 0.3460 - val_accuracy: 0.8784
Epoch 6/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2971 - accuracy: 0.8945 - val_loss: 0.3415 - val_accuracy: 0.8796
Epoch 7/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2858 - accuracy: 0.8985 - val_loss: 0.3353 - val_accuracy: 0.8834
Epoch 8/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2767 - accuracy: 0.9018 - val_loss: 0.3321 - val_accuracy: 0.8854
Epoch 9/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2685 - accuracy: 0.9043 - val_loss: 0.3281 - val_accuracy: 0.8862
Epoch 10/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2612 - accuracy: 0.9075 - val_loss: 0.3304 - val_accuracy: 0.8832
Epoch 11/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2554 - accuracy: 0.9097 - val_loss: 0.3261 - val_accuracy: 0.8868
Epoch 12/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2502 - accuracy: 0.9115 - val_loss: 0.3246 - val_accuracy: 0.8876
Epoch 13/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2456 - accuracy: 0.9133 - val_loss: 0.3243 - val_accuracy: 0.8870
Epoch 14/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2416 - accuracy: 0.9141 - val_loss: 0.3238 - val_accuracy: 0.8862
Epoch 15/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2380 - accuracy: 0.9170 - val_loss: 0.3197 - val_accuracy: 0.8876
Epoch 16/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2346 - accuracy: 0.9169 - val_loss: 0.3207 - val_accuracy: 0.8866
Epoch 17/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2321 - accuracy: 0.9186 - val_loss: 0.3182 - val_accuracy: 0.8878
Epoch 18/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2291 - accuracy: 0.9191 - val_loss: 0.3206 - val_accuracy: 0.8884
Epoch 19/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2271 - accuracy: 0.9201 - val_loss: 0.3194 - val_accuracy: 0.8876
Epoch 20/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2252 - accuracy: 0.9215 - val_loss: 0.3178 - val_accuracy: 0.8880
Epoch 21/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2234 - accuracy: 0.9218 - val_loss: 0.3171 - val_accuracy: 0.8904
Epoch 22/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2218 - accuracy: 0.9230 - val_loss: 0.3171 - val_accuracy: 0.8884
Epoch 23/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2204 - accuracy: 0.9227 - val_loss: 0.3168 - val_accuracy: 0.8882
Epoch 24/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2191 - accuracy: 0.9240 - val_loss: 0.3173 - val_accuracy: 0.8900
Epoch 25/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2182 - accuracy: 0.9239 - val_loss: 0.3166 - val_accuracy: 0.8892
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.6576 - accuracy: 0.7743 - val_loss: 0.4901 - val_accuracy: 0.8300
Epoch 2/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.4587 - accuracy: 0.8387 - val_loss: 0.4316 - val_accuracy: 0.8490
Epoch 3/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.4119 - accuracy: 0.8560 - val_loss: 0.4117 - val_accuracy: 0.8580
Epoch 4/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.3842 - accuracy: 0.8657 - val_loss: 0.3920 - val_accuracy: 0.8638
Epoch 5/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3636 - accuracy: 0.8708 - val_loss: 0.3739 - val_accuracy: 0.8710
Epoch 6/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3460 - accuracy: 0.8767 - val_loss: 0.3742 - val_accuracy: 0.8690
Epoch 7/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.3312 - accuracy: 0.8818 - val_loss: 0.3760 - val_accuracy: 0.8656
Epoch 8/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.3194 - accuracy: 0.8846 - val_loss: 0.3583 - val_accuracy: 0.8756
Epoch 9/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3056 - accuracy: 0.8902 - val_loss: 0.3474 - val_accuracy: 0.8820
Epoch 10/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2943 - accuracy: 0.8937 - val_loss: 0.3993 - val_accuracy: 0.8562
Epoch 11/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2845 - accuracy: 0.8957 - val_loss: 0.3446 - val_accuracy: 0.8820
Epoch 12/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2720 - accuracy: 0.9020 - val_loss: 0.3348 - val_accuracy: 0.8808
Epoch 13/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2536 - accuracy: 0.9094 - val_loss: 0.3386 - val_accuracy: 0.8822
Epoch 14/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2420 - accuracy: 0.9125 - val_loss: 0.3313 - val_accuracy: 0.8858
Epoch 15/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.2288 - accuracy: 0.9174 - val_loss: 0.3241 - val_accuracy: 0.8840
Epoch 16/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2169 - accuracy: 0.9222 - val_loss: 0.3342 - val_accuracy: 0.8846
Epoch 17/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2067 - accuracy: 0.9264 - val_loss: 0.3208 - val_accuracy: 0.8874
Epoch 18/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1977 - accuracy: 0.9301 - val_loss: 0.3186 - val_accuracy: 0.8888
Epoch 19/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1892 - accuracy: 0.9329 - val_loss: 0.3278 - val_accuracy: 0.8848
Epoch 20/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1818 - accuracy: 0.9375 - val_loss: 0.3195 - val_accuracy: 0.8894
Epoch 21/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1756 - accuracy: 0.9395 - val_loss: 0.3163 - val_accuracy: 0.8948
Epoch 22/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.1701 - accuracy: 0.9416 - val_loss: 0.3177 - val_accuracy: 0.8920
Epoch 23/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.1657 - accuracy: 0.9441 - val_loss: 0.3168 - val_accuracy: 0.8944
Epoch 24/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1629 - accuracy: 0.9454 - val_loss: 0.3167 - val_accuracy: 0.8946
Epoch 25/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.1611 - accuracy: 0.9465 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 7s 133us/sample - loss: 1.6006 - accuracy: 0.8129 - val_loss: 0.7374 - val_accuracy: 0.8236
Epoch 2/2
55000/55000 [==============================] - 7s 128us/sample - loss: 0.7179 - accuracy: 0.8265 - val_loss: 0.6905 - val_accuracy: 0.8356
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 139us/sample - loss: 0.5856 - accuracy: 0.7992 - val_loss: 0.3908 - val_accuracy: 0.8570
Epoch 2/2
55000/55000 [==============================] - 6s 117us/sample - loss: 0.4260 - accuracy: 0.8443 - val_loss: 0.3389 - val_accuracy: 0.8730
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
Train on 55000 samples
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4186 - accuracy: 0.8451
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
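###Markdown
As a follow-up sketch (an addition, not part of the original notebook): stacking several stochastic forward passes of `mc_model` also yields an uncertainty estimate via the standard deviation across the Monte Carlo samples; `mc_model` and `X_test_scaled` are assumed from the cells above.
###Code
# Hedged sketch: mean prediction plus per-class uncertainty from MC Dropout.
# `mc_model` and `X_test_scaled` are assumed to be defined as above.
Y_probas = np.stack([mc_model.predict(X_test_scaled[:1]) for sample in range(100)])
np.round(Y_probas.mean(axis=0), 2), np.round(Y_probas.std(axis=0), 2)
###Output
_____no_output_____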
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 6s 114us/sample - loss: 0.4734 - accuracy: 0.8364 - val_loss: 0.3999 - val_accuracy: 0.8614
Epoch 2/2
55000/55000 [==============================] - 6s 100us/sample - loss: 0.3583 - accuracy: 0.8685 - val_loss: 0.3494 - val_accuracy: 0.8746
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
5000/5000 [==============================] - 0s 65us/sample - loss: 1.5099 - accuracy: 0.4736
###Markdown
The model with the lowest validation loss gets about 47% accuracy on the validation set. It took 39 epochs to reach the lowest validation loss, with roughly 10 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 21s 466us/sample - loss: 1.8365 - accuracy: 0.3390 - val_loss: 1.6330 - val_accuracy: 0.4174
Epoch 2/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.6623 - accuracy: 0.4063 - val_loss: 1.5967 - val_accuracy: 0.4204
Epoch 3/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.5946 - accuracy: 0.4314 - val_loss: 1.5225 - val_accuracy: 0.4602
Epoch 4/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5417 - accuracy: 0.4551 - val_loss: 1.4680 - val_accuracy: 0.4756
Epoch 5/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5013 - accuracy: 0.4678 - val_loss: 1.4378 - val_accuracy: 0.4862
Epoch 6/100
45000/45000 [==============================] - 16s 361us/sample - loss: 1.4637 - accuracy: 0.4797 - val_loss: 1.4221 - val_accuracy: 0.4982
Epoch 7/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.4361 - accuracy: 0.4921 - val_loss: 1.4133 - val_accuracy: 0.4968
Epoch 8/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.4078 - accuracy: 0.4998 - val_loss: 1.3916 - val_accuracy: 0.5040
Epoch 9/100
45000/45000 [==============================] - 14s 315us/sample - loss: 1.3811 - accuracy: 0.5104 - val_loss: 1.3695 - val_accuracy: 0.5116
Epoch 10/100
45000/45000 [==============================] - 14s 318us/sample - loss: 1.3571 - accuracy: 0.5205 - val_loss: 1.3701 - val_accuracy: 0.5112
Epoch 11/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.3367 - accuracy: 0.5246 - val_loss: 1.3549 - val_accuracy: 0.5196
Epoch 12/100
45000/45000 [==============================] - 14s 316us/sample - loss: 1.3158 - accuracy: 0.5322 - val_loss: 1.4038 - val_accuracy: 0.5048
Epoch 13/100
45000/45000 [==============================] - 15s 328us/sample - loss: 1.3028 - accuracy: 0.5392 - val_loss: 1.3453 - val_accuracy: 0.5242
Epoch 14/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2798 - accuracy: 0.5460 - val_loss: 1.3427 - val_accuracy: 0.5218
Epoch 15/100
45000/45000 [==============================] - 15s 327us/sample - loss: 1.2642 - accuracy: 0.5502 - val_loss: 1.3802 - val_accuracy: 0.5072
Epoch 16/100
45000/45000 [==============================] - 15s 336us/sample - loss: 1.2497 - accuracy: 0.5592 - val_loss: 1.3870 - val_accuracy: 0.5154
Epoch 17/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.2339 - accuracy: 0.5645 - val_loss: 1.3270 - val_accuracy: 0.5366
Epoch 18/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2223 - accuracy: 0.5688 - val_loss: 1.3054 - val_accuracy: 0.5506
Epoch 19/100
45000/45000 [==============================] - 15s 339us/sample - loss: 1.2015 - accuracy: 0.5750 - val_loss: 1.3134 - val_accuracy: 0.5462
Epoch 20/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.1884 - accuracy: 0.5796 - val_loss: 1.3459 - val_accuracy: 0.5252
Epoch 21/100
45000/45000 [==============================] - 17s 370us/sample - loss: 1.1767 - accuracy: 0.5876 - val_loss: 1.3404 - val_accuracy: 0.5392
Epoch 22/100
45000/45000 [==============================] - 16s 366us/sample - loss: 1.1679 - accuracy: 0.5872 - val_loss: 1.3600 - val_accuracy: 0.5332
Epoch 23/100
45000/45000 [==============================] - 15s 337us/sample - loss: 1.1513 - accuracy: 0.5954 - val_loss: 1.3148 - val_accuracy: 0.5498
Epoch 24/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.1345 - accuracy: 0.6033 - val_loss: 1.3290 - val_accuracy: 0.5368
Epoch 25/100
45000/45000 [==============================] - 16s 350us/sample - loss: 1.1252 - accuracy: 0.6025 - val_loss: 1.3350 - val_accuracy: 0.5434
Epoch 26/100
45000/45000 [==============================] - 15s 341us/sample - loss: 1.1192 - accuracy: 0.6070 - val_loss: 1.3423 - val_accuracy: 0.5364
Epoch 27/100
45000/45000 [==============================] - 15s 342us/sample - loss: 1.1028 - accuracy: 0.6093 - val_loss: 1.3511 - val_accuracy: 0.5358
Epoch 28/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.0907 - accuracy: 0.6158 - val_loss: 1.3706 - val_accuracy: 0.5350
Epoch 29/100
45000/45000 [==============================] - 16s 345us/sample - loss: 1.0785 - accuracy: 0.6197 - val_loss: 1.3356 - val_accuracy: 0.5398
Epoch 30/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.0718 - accuracy: 0.6198 - val_loss: 1.3529 - val_accuracy: 0.5446
Epoch 31/100
45000/45000 [==============================] - 15s 333us/sample - loss: 1.0629 - accuracy: 0.6259 - val_loss: 1.3590 - val_accuracy: 0.5434
Epoch 32/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.0504 - accuracy: 0.6292 - val_loss: 1.3448 - val_accuracy: 0.5388
Epoch 33/100
45000/45000 [==============================] - 15s 325us/sample - loss: 1.0420 - accuracy: 0.6318 - val_loss: 1.3790 - val_accuracy: 0.5350
Epoch 34/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.0304 - accuracy: 0.6362 - val_loss: 1.3621 - val_accuracy: 0.5430
Epoch 35/100
45000/45000 [==============================] - 16s 356us/sample - loss: 1.0280 - accuracy: 0.6362 - val_loss: 1.3673 - val_accuracy: 0.5366
Epoch 36/100
45000/45000 [==============================] - 16s 354us/sample - loss: 1.0100 - accuracy: 0.6439 - val_loss: 1.3659 - val_accuracy: 0.5420
Epoch 37/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.0060 - accuracy: 0.6473 - val_loss: 1.3773 - val_accuracy: 0.5398
Epoch 38/100
45000/45000 [==============================] - 15s 332us/sample - loss: 0.9966 - accuracy: 0.6496 - val_loss: 1.3946 - val_accuracy: 0.5340
5000/5000 [==============================] - 1s 157us/sample - loss: 1.3054 - accuracy: 0.5506
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 39 epochs to reach the lowest validation loss, while the new model with BN took 18 epochs. That's more than twice as fast as the previous model. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 55% accuracy instead of 47%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged twice as fast, each epoch took about 16s instead of 10s, because of the extra computations required by the BN layers. So overall, although the number of epochs was reduced by 50%, the training time (wall time) was shortened by 30%. Which is still pretty significant! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustements to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
5000/5000 [==============================] - 0s 74us/sample - loss: 1.4626 - accuracy: 0.5140
###Markdown
We get 51.4% accuracy, which is better than the original model, but not quite as good as the model using batch normalization. Moreover, it took 13 epochs to reach the best model, which is much faster than both the original model and the BN model, plus each epoch took only 10 seconds, just like the original model. So it's by far the fastest model to train (both in terms of epochs and wall time). e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 12s 263us/sample - loss: 1.8763 - accuracy: 0.3330 - val_loss: 1.7595 - val_accuracy: 0.3668
Epoch 2/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.6527 - accuracy: 0.4148 - val_loss: 1.7666 - val_accuracy: 0.3808
Epoch 3/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.5682 - accuracy: 0.4439 - val_loss: 1.6393 - val_accuracy: 0.4490
Epoch 4/100
45000/45000 [==============================] - 10s 211us/sample - loss: 1.5030 - accuracy: 0.4698 - val_loss: 1.6028 - val_accuracy: 0.4466
Epoch 5/100
45000/45000 [==============================] - 9s 209us/sample - loss: 1.4430 - accuracy: 0.4913 - val_loss: 1.5394 - val_accuracy: 0.4562
Epoch 6/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.4005 - accuracy: 0.5084 - val_loss: 1.5408 - val_accuracy: 0.4818
Epoch 7/100
45000/45000 [==============================] - 10s 216us/sample - loss: 1.3541 - accuracy: 0.5298 - val_loss: 1.5236 - val_accuracy: 0.4866
Epoch 8/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.3189 - accuracy: 0.5405 - val_loss: 1.5174 - val_accuracy: 0.4926
Epoch 9/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.2800 - accuracy: 0.5570 - val_loss: 1.5722 - val_accuracy: 0.4998
Epoch 10/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.2512 - accuracy: 0.5656 - val_loss: 1.4974 - val_accuracy: 0.5082
Epoch 11/100
45000/45000 [==============================] - 9s 203us/sample - loss: 1.2141 - accuracy: 0.5802 - val_loss: 1.6123 - val_accuracy: 0.4916
Epoch 12/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.1856 - accuracy: 0.5893 - val_loss: 1.5449 - val_accuracy: 0.5016
Epoch 13/100
45000/45000 [==============================] - 9s 204us/sample - loss: 1.1602 - accuracy: 0.5978 - val_loss: 1.6241 - val_accuracy: 0.5056
Epoch 14/100
45000/45000 [==============================] - 9s 199us/sample - loss: 1.1290 - accuracy: 0.6118 - val_loss: 1.6085 - val_accuracy: 0.4936
Epoch 15/100
45000/45000 [==============================] - 9s 198us/sample - loss: 1.1050 - accuracy: 0.6176 - val_loss: 1.6951 - val_accuracy: 0.4860
Epoch 16/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.0786 - accuracy: 0.6293 - val_loss: 1.5806 - val_accuracy: 0.5044
Epoch 17/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.0629 - accuracy: 0.6362 - val_loss: 1.5932 - val_accuracy: 0.4970
Epoch 18/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.0330 - accuracy: 0.6458 - val_loss: 1.5968 - val_accuracy: 0.5080
Epoch 19/100
45000/45000 [==============================] - 9s 195us/sample - loss: 1.0104 - accuracy: 0.6488 - val_loss: 1.6166 - val_accuracy: 0.5152
Epoch 20/100
45000/45000 [==============================] - 9s 206us/sample - loss: 0.9896 - accuracy: 0.6629 - val_loss: 1.6174 - val_accuracy: 0.5154
Epoch 21/100
45000/45000 [==============================] - 9s 211us/sample - loss: 0.9741 - accuracy: 0.6650 - val_loss: 1.7201 - val_accuracy: 0.5040
Epoch 22/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9475 - accuracy: 0.6769 - val_loss: 1.7498 - val_accuracy: 0.5176
Epoch 23/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.9346 - accuracy: 0.6780 - val_loss: 1.7491 - val_accuracy: 0.5020
Epoch 24/100
45000/45000 [==============================] - 10s 223us/sample - loss: 1.1878 - accuracy: 0.6792 - val_loss: 1.6664 - val_accuracy: 0.4906
Epoch 25/100
45000/45000 [==============================] - 10s 219us/sample - loss: 0.9851 - accuracy: 0.6646 - val_loss: 1.7358 - val_accuracy: 0.5086
Epoch 26/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9053 - accuracy: 0.6911 - val_loss: 1.8361 - val_accuracy: 0.5094
Epoch 27/100
45000/45000 [==============================] - 10s 215us/sample - loss: 0.8681 - accuracy: 0.7048 - val_loss: 1.8487 - val_accuracy: 0.5036
Epoch 28/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.8460 - accuracy: 0.7132 - val_loss: 1.8516 - val_accuracy: 0.5068
Epoch 29/100
45000/45000 [==============================] - 10s 223us/sample - loss: 0.8258 - accuracy: 0.7208 - val_loss: 1.9383 - val_accuracy: 0.5094
Epoch 30/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.8106 - accuracy: 0.7248 - val_loss: 2.0527 - val_accuracy: 0.4974
5000/5000 [==============================] - 0s 71us/sample - loss: 1.4974 - accuracy: 0.5082
###Markdown
The model reaches 50.8% accuracy on the validation set. That's very slightly worse than without dropout (51.4%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get virtually no accuracy improvement in this case (from 50.8% to 50.9%). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(len(X_train_scaled) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/15
45000/45000 [==============================] - 3s 69us/sample - loss: 2.0504 - accuracy: 0.2823 - val_loss: 1.7711 - val_accuracy: 0.3706
Epoch 2/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.7626 - accuracy: 0.3766 - val_loss: 1.7751 - val_accuracy: 0.3844
Epoch 3/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.6264 - accuracy: 0.4272 - val_loss: 1.6774 - val_accuracy: 0.4216
Epoch 4/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.5527 - accuracy: 0.4474 - val_loss: 1.6633 - val_accuracy: 0.4316
Epoch 5/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.4997 - accuracy: 0.4701 - val_loss: 1.5909 - val_accuracy: 0.4540
Epoch 6/15
45000/45000 [==============================] - 3s 60us/sample - loss: 1.4564 - accuracy: 0.4841 - val_loss: 1.5982 - val_accuracy: 0.4624
Epoch 7/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.4232 - accuracy: 0.4958 - val_loss: 1.6417 - val_accuracy: 0.4382
Epoch 8/15
45000/45000 [==============================] - 3s 58us/sample - loss: 1.3530 - accuracy: 0.5199 - val_loss: 1.5050 - val_accuracy: 0.4778
Epoch 9/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.2771 - accuracy: 0.5480 - val_loss: 1.5254 - val_accuracy: 0.4928
Epoch 10/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.2073 - accuracy: 0.5726 - val_loss: 1.5013 - val_accuracy: 0.5052
Epoch 11/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.1380 - accuracy: 0.5948 - val_loss: 1.4941 - val_accuracy: 0.5170
Epoch 12/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.0672 - accuracy: 0.6204 - val_loss: 1.5091 - val_accuracy: 0.5106
Epoch 13/15
45000/45000 [==============================] - 3s 56us/sample - loss: 0.9967 - accuracy: 0.6466 - val_loss: 1.5261 - val_accuracy: 0.5212
Epoch 14/15
45000/45000 [==============================] - 3s 58us/sample - loss: 0.9301 - accuracy: 0.6712 - val_loss: 1.5437 - val_accuracy: 0.5264
Epoch 15/15
45000/45000 [==============================] - 3s 59us/sample - loss: 0.8893 - accuracy: 0.6866 - val_loss: 1.5650 - val_accuracy: 0.5276
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, make sure Matplotlib plots figures inline in the notebook, and prepare a function to save the figures. We also check whether Python 3.5 or later is installed (although Python 2.x may still work, it is deprecated, so we strongly encourage you to use Python 3 instead), as well as Scikit-Learn ≥ 0.20.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6314 - accuracy: 0.5054 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.8416 - accuracy: 0.7247 - val_loss: 0.7130 - val_accuracy: 0.7656
Epoch 3/10
1719/1719 [==============================] - 2s 879us/step - loss: 0.7053 - accuracy: 0.7637 - val_loss: 0.6427 - val_accuracy: 0.7898
Epoch 4/10
1719/1719 [==============================] - 2s 883us/step - loss: 0.6325 - accuracy: 0.7908 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 2s 887us/step - loss: 0.5992 - accuracy: 0.8021 - val_loss: 0.5582 - val_accuracy: 0.8200
Epoch 6/10
1719/1719 [==============================] - 2s 881us/step - loss: 0.5624 - accuracy: 0.8142 - val_loss: 0.5350 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.5379 - accuracy: 0.8217 - val_loss: 0.5157 - val_accuracy: 0.8304
Epoch 8/10
1719/1719 [==============================] - 2s 895us/step - loss: 0.5152 - accuracy: 0.8295 - val_loss: 0.5078 - val_accuracy: 0.8284
Epoch 9/10
1719/1719 [==============================] - 2s 911us/step - loss: 0.5100 - accuracy: 0.8268 - val_loss: 0.4895 - val_accuracy: 0.8390
Epoch 10/10
1719/1719 [==============================] - 2s 897us/step - loss: 0.4918 - accuracy: 0.8340 - val_loss: 0.4817 - val_accuracy: 0.8396
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6969 - accuracy: 0.4974 - val_loss: 0.9255 - val_accuracy: 0.7186
Epoch 2/10
1719/1719 [==============================] - 2s 990us/step - loss: 0.8706 - accuracy: 0.7247 - val_loss: 0.7305 - val_accuracy: 0.7630
Epoch 3/10
1719/1719 [==============================] - 2s 980us/step - loss: 0.7211 - accuracy: 0.7621 - val_loss: 0.6564 - val_accuracy: 0.7882
Epoch 4/10
1719/1719 [==============================] - 2s 985us/step - loss: 0.6447 - accuracy: 0.7879 - val_loss: 0.6003 - val_accuracy: 0.8048
Epoch 5/10
1719/1719 [==============================] - 2s 967us/step - loss: 0.6077 - accuracy: 0.8004 - val_loss: 0.5656 - val_accuracy: 0.8182
Epoch 6/10
1719/1719 [==============================] - 2s 984us/step - loss: 0.5692 - accuracy: 0.8118 - val_loss: 0.5406 - val_accuracy: 0.8236
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5428 - accuracy: 0.8194 - val_loss: 0.5196 - val_accuracy: 0.8314
Epoch 8/10
1719/1719 [==============================] - 2s 983us/step - loss: 0.5193 - accuracy: 0.8284 - val_loss: 0.5113 - val_accuracy: 0.8316
Epoch 9/10
1719/1719 [==============================] - 2s 992us/step - loss: 0.5128 - accuracy: 0.8272 - val_loss: 0.4916 - val_accuracy: 0.8378
Epoch 10/10
1719/1719 [==============================] - 2s 988us/step - loss: 0.4941 - accuracy: 0.8314 - val_loss: 0.4826 - val_accuracy: 0.8398
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 12s 6ms/step - loss: 1.3556 - accuracy: 0.4808 - val_loss: 0.7711 - val_accuracy: 0.6858
Epoch 2/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7537 - accuracy: 0.7235 - val_loss: 0.7534 - val_accuracy: 0.7384
Epoch 3/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7451 - accuracy: 0.7357 - val_loss: 0.5943 - val_accuracy: 0.7834
Epoch 4/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5699 - accuracy: 0.7906 - val_loss: 0.5434 - val_accuracy: 0.8066
Epoch 5/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5569 - accuracy: 0.8051 - val_loss: 0.4907 - val_accuracy: 0.8218
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 11s 5ms/step - loss: 2.0460 - accuracy: 0.1919 - val_loss: 1.5971 - val_accuracy: 0.3048
Epoch 2/5
1719/1719 [==============================] - 8s 5ms/step - loss: 1.2654 - accuracy: 0.4591 - val_loss: 0.9156 - val_accuracy: 0.6372
Epoch 3/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.9312 - accuracy: 0.6169 - val_loss: 0.8928 - val_accuracy: 0.6246
Epoch 4/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.8188 - accuracy: 0.6710 - val_loss: 0.6914 - val_accuracy: 0.7396
Epoch 5/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.7288 - accuracy: 0.7152 - val_loss: 0.6638 - val_accuracy: 0.7380
###Markdown
Not great at all: we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
#bn1.updates #deprecated
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.2287 - accuracy: 0.5993 - val_loss: 0.5526 - val_accuracy: 0.8230
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5996 - accuracy: 0.7959 - val_loss: 0.4725 - val_accuracy: 0.8468
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5312 - accuracy: 0.8168 - val_loss: 0.4375 - val_accuracy: 0.8558
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4884 - accuracy: 0.8294 - val_loss: 0.4153 - val_accuracy: 0.8596
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4717 - accuracy: 0.8343 - val_loss: 0.3997 - val_accuracy: 0.8640
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4420 - accuracy: 0.8461 - val_loss: 0.3867 - val_accuracy: 0.8694
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4285 - accuracy: 0.8496 - val_loss: 0.3763 - val_accuracy: 0.8710
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4086 - accuracy: 0.8552 - val_loss: 0.3711 - val_accuracy: 0.8740
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4079 - accuracy: 0.8566 - val_loss: 0.3631 - val_accuracy: 0.8752
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3903 - accuracy: 0.8617 - val_loss: 0.3573 - val_accuracy: 0.8750
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer includes one as well; it would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.3677 - accuracy: 0.5604 - val_loss: 0.6767 - val_accuracy: 0.7812
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.7136 - accuracy: 0.7702 - val_loss: 0.5566 - val_accuracy: 0.8184
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.6123 - accuracy: 0.7990 - val_loss: 0.5007 - val_accuracy: 0.8360
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5547 - accuracy: 0.8148 - val_loss: 0.4666 - val_accuracy: 0.8448
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5255 - accuracy: 0.8230 - val_loss: 0.4434 - val_accuracy: 0.8534
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4947 - accuracy: 0.8328 - val_loss: 0.4263 - val_accuracy: 0.8550
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4736 - accuracy: 0.8385 - val_loss: 0.4130 - val_accuracy: 0.8566
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4550 - accuracy: 0.8446 - val_loss: 0.4035 - val_accuracy: 0.8612
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4495 - accuracy: 0.8440 - val_loss: 0.3943 - val_accuracy: 0.8638
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4333 - accuracy: 0.8494 - val_loss: 0.3875 - val_accuracy: 0.8660
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)  # clip every gradient component to the range [-1.0, 1.0]
optimizer = keras.optimizers.SGD(clipnorm=1.0)   # or rescale each gradient so its norm does not exceed 1.0
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:

* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).
* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.

The validation set and the test set are also split this way, but without restricting the number of images. We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])  # reuse every layer of model A except its output layer
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
# model_B_on_A shares its layers with model_A, so training it would also modify model_A;
# clone model_A and copy its weights first if you want to keep an untouched copy.
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
    layer.trainable = False  # freeze the reused layers for the first few epochs
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
    layer.trainable = True  # unfreeze the reused layers and keep training with a low learning rate
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 1s 83ms/step - loss: 0.6155 - accuracy: 0.6184 - val_loss: 0.5843 - val_accuracy: 0.6329
Epoch 2/4
7/7 [==============================] - 0s 9ms/step - loss: 0.5550 - accuracy: 0.6638 - val_loss: 0.5467 - val_accuracy: 0.6805
Epoch 3/4
7/7 [==============================] - 0s 8ms/step - loss: 0.4897 - accuracy: 0.7482 - val_loss: 0.5146 - val_accuracy: 0.7089
Epoch 4/4
7/7 [==============================] - 0s 8ms/step - loss: 0.4899 - accuracy: 0.7405 - val_loss: 0.4859 - val_accuracy: 0.7323
Epoch 1/16
7/7 [==============================] - 0s 28ms/step - loss: 0.4380 - accuracy: 0.7774 - val_loss: 0.3460 - val_accuracy: 0.8661
Epoch 2/16
7/7 [==============================] - 0s 9ms/step - loss: 0.2971 - accuracy: 0.9143 - val_loss: 0.2603 - val_accuracy: 0.9310
Epoch 3/16
7/7 [==============================] - 0s 9ms/step - loss: 0.2034 - accuracy: 0.9777 - val_loss: 0.2110 - val_accuracy: 0.9554
Epoch 4/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1754 - accuracy: 0.9719 - val_loss: 0.1790 - val_accuracy: 0.9696
Epoch 5/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1348 - accuracy: 0.9809 - val_loss: 0.1561 - val_accuracy: 0.9757
Epoch 6/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1172 - accuracy: 0.9973 - val_loss: 0.1392 - val_accuracy: 0.9797
Epoch 7/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1137 - accuracy: 0.9931 - val_loss: 0.1266 - val_accuracy: 0.9838
Epoch 8/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1000 - accuracy: 0.9931 - val_loss: 0.1163 - val_accuracy: 0.9858
Epoch 9/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0834 - accuracy: 1.0000 - val_loss: 0.1065 - val_accuracy: 0.9888
Epoch 10/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0775 - accuracy: 1.0000 - val_loss: 0.0999 - val_accuracy: 0.9899
Epoch 11/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0689 - accuracy: 1.0000 - val_loss: 0.0939 - val_accuracy: 0.9899
Epoch 12/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0888 - val_accuracy: 0.9899
Epoch 13/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0565 - accuracy: 1.0000 - val_loss: 0.0839 - val_accuracy: 0.9899
Epoch 14/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0494 - accuracy: 1.0000 - val_loss: 0.0802 - val_accuracy: 0.9899
Epoch 15/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0544 - accuracy: 1.0000 - val_loss: 0.0768 - val_accuracy: 0.9899
Epoch 16/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0472 - accuracy: 1.0000 - val_loss: 0.0738 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 705us/step - loss: 0.0682 - accuracy: 0.9935
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4.5!
###Code
(100 - 97.05) / (100 - 99.35)
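# A small sketch (added; not in the original cell): derive the same error-rate ratio
# from the two evaluations above instead of hard-coding the accuracies.
loss_B, acc_B = model_B.evaluate(X_test_B, y_test_B, verbose=0)
loss_B_on_A, acc_B_on_A = model_B_on_A.evaluate(X_test_B, y_test_B, verbose=0)
(1 - acc_B) / (1 - acc_B_on_A)  # error rate of model_B divided by error rate of model_B_on_A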
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling

```
lr = lr0 / (1 + steps / s)**c
```

* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
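# Note (added): the legacy `decay` argument implements the power-scheduling formula
# above with c = 1 and s = 1/decay, i.e. lr = 0.01 / (1 + 1e-4 * step).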
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
import math
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = math.ceil(len(X_train) / batch_size)
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
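# Sketch (added; not in the original cell): the two-argument variant plugs into the
# same LearningRateScheduler callback as the one-argument version used earlier.
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)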
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5995 - accuracy: 0.7923 - val_loss: 0.4095 - val_accuracy: 0.8606
Epoch 2/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3890 - accuracy: 0.8613 - val_loss: 0.3738 - val_accuracy: 0.8692
Epoch 3/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3530 - accuracy: 0.8772 - val_loss: 0.3735 - val_accuracy: 0.8692
Epoch 4/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3296 - accuracy: 0.8813 - val_loss: 0.3494 - val_accuracy: 0.8798
Epoch 5/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3178 - accuracy: 0.8867 - val_loss: 0.3430 - val_accuracy: 0.8794
Epoch 6/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2930 - accuracy: 0.8951 - val_loss: 0.3414 - val_accuracy: 0.8826
Epoch 7/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2854 - accuracy: 0.8985 - val_loss: 0.3354 - val_accuracy: 0.8810
Epoch 8/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9039 - val_loss: 0.3364 - val_accuracy: 0.8824
Epoch 9/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9047 - val_loss: 0.3265 - val_accuracy: 0.8846
Epoch 10/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2570 - accuracy: 0.9084 - val_loss: 0.3238 - val_accuracy: 0.8854
Epoch 11/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2502 - accuracy: 0.9117 - val_loss: 0.3250 - val_accuracy: 0.8862
Epoch 12/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2453 - accuracy: 0.9145 - val_loss: 0.3299 - val_accuracy: 0.8830
Epoch 13/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2408 - accuracy: 0.9154 - val_loss: 0.3219 - val_accuracy: 0.8870
Epoch 14/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2380 - accuracy: 0.9154 - val_loss: 0.3221 - val_accuracy: 0.8860
Epoch 15/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2378 - accuracy: 0.9166 - val_loss: 0.3208 - val_accuracy: 0.8864
Epoch 16/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2318 - accuracy: 0.9191 - val_loss: 0.3184 - val_accuracy: 0.8892
Epoch 17/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2266 - accuracy: 0.9212 - val_loss: 0.3197 - val_accuracy: 0.8906
Epoch 18/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2284 - accuracy: 0.9185 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 19/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2286 - accuracy: 0.9205 - val_loss: 0.3197 - val_accuracy: 0.8884
Epoch 20/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2288 - accuracy: 0.9211 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 21/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2265 - accuracy: 0.9212 - val_loss: 0.3179 - val_accuracy: 0.8904
Epoch 22/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2258 - accuracy: 0.9205 - val_loss: 0.3163 - val_accuracy: 0.8914
Epoch 23/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9226 - val_loss: 0.3170 - val_accuracy: 0.8904
Epoch 24/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2182 - accuracy: 0.9244 - val_loss: 0.3165 - val_accuracy: 0.8898
Epoch 25/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9229 - val_loss: 0.3164 - val_accuracy: 0.8904
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
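# Sketch (added; not in the original cell): the schedule object is passed to an
# optimizer in place of a constant learning rate, as with ExponentialDecay above.
optimizer = keras.optimizers.SGD(learning_rate)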
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = math.ceil(len(X) / batch_size) * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
###Output
_____no_output_____
###Markdown
**Warning**: In the `on_batch_end()` method, `logs["loss"]` used to contain the batch loss, but in TensorFlow 2.2.0 it was replaced with the mean loss (since the start of the epoch). This explains why the graph below is much smoother than in the book (if you are using TF 2.2 or above). It also means that there is a lag between the moment the batch loss starts exploding and the moment the explosion becomes clear in the graph. So you should choose a slightly smaller learning rate than you would have chosen with the "noisy" graph. Alternatively, you can tweak the `ExponentialLearningRate` callback above so it computes the batch loss (based on the current mean loss and the previous mean loss):

```python
class ExponentialLearningRate(keras.callbacks.Callback):
    def __init__(self, factor):
        self.factor = factor
        self.rates = []
        self.losses = []
    def on_epoch_begin(self, epoch, logs=None):
        self.prev_loss = 0
    def on_batch_end(self, batch, logs=None):
        batch_loss = logs["loss"] * (batch + 1) - self.prev_loss * batch
        self.prev_loss = logs["loss"]
        self.rates.append(K.get_value(self.model.optimizer.lr))
        self.losses.append(batch_loss)
        K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
```
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(math.ceil(len(X_train) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 1s 2ms/step - loss: 0.6572 - accuracy: 0.7740 - val_loss: 0.4872 - val_accuracy: 0.8338
Epoch 2/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4580 - accuracy: 0.8397 - val_loss: 0.4274 - val_accuracy: 0.8520
Epoch 3/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4121 - accuracy: 0.8545 - val_loss: 0.4116 - val_accuracy: 0.8588
Epoch 4/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3837 - accuracy: 0.8642 - val_loss: 0.3868 - val_accuracy: 0.8688
Epoch 5/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3639 - accuracy: 0.8719 - val_loss: 0.3766 - val_accuracy: 0.8688
Epoch 6/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3456 - accuracy: 0.8775 - val_loss: 0.3739 - val_accuracy: 0.8706
Epoch 7/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3330 - accuracy: 0.8811 - val_loss: 0.3635 - val_accuracy: 0.8708
Epoch 8/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3184 - accuracy: 0.8861 - val_loss: 0.3959 - val_accuracy: 0.8610
Epoch 9/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3065 - accuracy: 0.8890 - val_loss: 0.3475 - val_accuracy: 0.8770
Epoch 10/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2943 - accuracy: 0.8927 - val_loss: 0.3392 - val_accuracy: 0.8806
Epoch 11/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2838 - accuracy: 0.8963 - val_loss: 0.3467 - val_accuracy: 0.8800
Epoch 12/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2707 - accuracy: 0.9024 - val_loss: 0.3646 - val_accuracy: 0.8696
Epoch 13/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2536 - accuracy: 0.9079 - val_loss: 0.3350 - val_accuracy: 0.8842
Epoch 14/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2405 - accuracy: 0.9135 - val_loss: 0.3465 - val_accuracy: 0.8794
Epoch 15/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2279 - accuracy: 0.9185 - val_loss: 0.3257 - val_accuracy: 0.8830
Epoch 16/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2159 - accuracy: 0.9232 - val_loss: 0.3294 - val_accuracy: 0.8824
Epoch 17/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2062 - accuracy: 0.9263 - val_loss: 0.3333 - val_accuracy: 0.8882
Epoch 18/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1978 - accuracy: 0.9301 - val_loss: 0.3235 - val_accuracy: 0.8898
Epoch 19/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1892 - accuracy: 0.9337 - val_loss: 0.3233 - val_accuracy: 0.8906
Epoch 20/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1821 - accuracy: 0.9365 - val_loss: 0.3224 - val_accuracy: 0.8928
Epoch 21/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1752 - accuracy: 0.9400 - val_loss: 0.3220 - val_accuracy: 0.8908
Epoch 22/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1700 - accuracy: 0.9416 - val_loss: 0.3180 - val_accuracy: 0.8962
Epoch 23/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1655 - accuracy: 0.9438 - val_loss: 0.3187 - val_accuracy: 0.8940
Epoch 24/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1627 - accuracy: 0.9454 - val_loss: 0.3177 - val_accuracy: 0.8932
Epoch 25/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1610 - accuracy: 0.9462 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 3.2911 - accuracy: 0.7924 - val_loss: 0.7218 - val_accuracy: 0.8310
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.7282 - accuracy: 0.8245 - val_loss: 0.6826 - val_accuracy: 0.8382
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 0.7611 - accuracy: 0.7576 - val_loss: 0.3730 - val_accuracy: 0.8644
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4306 - accuracy: 0.8401 - val_loss: 0.3395 - val_accuracy: 0.8722
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4225 - accuracy: 0.8432
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5763 - accuracy: 0.8020 - val_loss: 0.3674 - val_accuracy: 0.8674
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3545 - accuracy: 0.8709 - val_loss: 0.3714 - val_accuracy: 0.8662
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4960 - accuracy: 0.4762
###Markdown
The model with the lowest validation loss gets about 47.6% accuracy on the validation set. It took 27 epochs to reach the lowest validation loss, with roughly 8 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:

* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.
* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.
* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 19s 9ms/step - loss: 1.9765 - accuracy: 0.2968 - val_loss: 1.6602 - val_accuracy: 0.4042
Epoch 2/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6787 - accuracy: 0.4056 - val_loss: 1.5887 - val_accuracy: 0.4304
Epoch 3/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6097 - accuracy: 0.4274 - val_loss: 1.5781 - val_accuracy: 0.4326
Epoch 4/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5574 - accuracy: 0.4486 - val_loss: 1.5064 - val_accuracy: 0.4676
Epoch 5/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5075 - accuracy: 0.4642 - val_loss: 1.4412 - val_accuracy: 0.4844
Epoch 6/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4664 - accuracy: 0.4787 - val_loss: 1.4179 - val_accuracy: 0.4984
Epoch 7/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4334 - accuracy: 0.4932 - val_loss: 1.4277 - val_accuracy: 0.4906
Epoch 8/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.4054 - accuracy: 0.5038 - val_loss: 1.3843 - val_accuracy: 0.5130
Epoch 9/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3816 - accuracy: 0.5106 - val_loss: 1.3691 - val_accuracy: 0.5108
Epoch 10/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3547 - accuracy: 0.5206 - val_loss: 1.3552 - val_accuracy: 0.5226
Epoch 11/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.3244 - accuracy: 0.5371 - val_loss: 1.3678 - val_accuracy: 0.5142
Epoch 12/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3078 - accuracy: 0.5393 - val_loss: 1.3844 - val_accuracy: 0.5080
Epoch 13/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2889 - accuracy: 0.5431 - val_loss: 1.3566 - val_accuracy: 0.5164
Epoch 14/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2607 - accuracy: 0.5559 - val_loss: 1.3626 - val_accuracy: 0.5248
Epoch 15/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2580 - accuracy: 0.5587 - val_loss: 1.3616 - val_accuracy: 0.5276
Epoch 16/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2441 - accuracy: 0.5586 - val_loss: 1.3350 - val_accuracy: 0.5286
Epoch 17/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2241 - accuracy: 0.5676 - val_loss: 1.3370 - val_accuracy: 0.5408
Epoch 18/100
<<29 more lines>>
Epoch 33/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0336 - accuracy: 0.6369 - val_loss: 1.3682 - val_accuracy: 0.5450
Epoch 34/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.0228 - accuracy: 0.6388 - val_loss: 1.3348 - val_accuracy: 0.5458
Epoch 35/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0205 - accuracy: 0.6407 - val_loss: 1.3490 - val_accuracy: 0.5440
Epoch 36/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.0008 - accuracy: 0.6489 - val_loss: 1.3568 - val_accuracy: 0.5408
Epoch 37/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9785 - accuracy: 0.6543 - val_loss: 1.3628 - val_accuracy: 0.5396
Epoch 38/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9832 - accuracy: 0.6592 - val_loss: 1.3617 - val_accuracy: 0.5482
Epoch 39/100
1407/1407 [==============================] - 12s 8ms/step - loss: 0.9707 - accuracy: 0.6581 - val_loss: 1.3767 - val_accuracy: 0.5446
Epoch 40/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9590 - accuracy: 0.6651 - val_loss: 1.4200 - val_accuracy: 0.5314
Epoch 41/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9548 - accuracy: 0.6668 - val_loss: 1.3692 - val_accuracy: 0.5450
Epoch 42/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9480 - accuracy: 0.6667 - val_loss: 1.3841 - val_accuracy: 0.5310
Epoch 43/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9411 - accuracy: 0.6716 - val_loss: 1.4036 - val_accuracy: 0.5382
Epoch 44/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9383 - accuracy: 0.6708 - val_loss: 1.4114 - val_accuracy: 0.5236
Epoch 45/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9258 - accuracy: 0.6769 - val_loss: 1.4224 - val_accuracy: 0.5324
Epoch 46/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9072 - accuracy: 0.6836 - val_loss: 1.3875 - val_accuracy: 0.5442
Epoch 47/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8996 - accuracy: 0.6850 - val_loss: 1.4449 - val_accuracy: 0.5280
Epoch 48/100
1407/1407 [==============================] - 13s 9ms/step - loss: 0.9050 - accuracy: 0.6835 - val_loss: 1.4167 - val_accuracy: 0.5338
Epoch 49/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8934 - accuracy: 0.6880 - val_loss: 1.4260 - val_accuracy: 0.5294
157/157 [==============================] - 1s 2ms/step - loss: 1.3344 - accuracy: 0.5398
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 27 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 5 epochs and continued to make progress until the 16th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.
* *Does BN produce a better model?* Yes! The final model is also much better, with 54.0% accuracy instead of 47.6%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).
* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 12s instead of 8s, because of the extra computations required by the BN layers. But overall the training time (wall time) was shortened significantly!

d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4633 - accuracy: 0.4792
###Markdown
We get 47.9% accuracy, which is not much better than the original model (47.6%), and not as good as the model using batch normalization (54.0%). However, convergence was almost as fast as with the BN model, plus each epoch took only 7 seconds. So it's by far the fastest model to train so far. e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 9s 5ms/step - loss: 2.0583 - accuracy: 0.2742 - val_loss: 1.7429 - val_accuracy: 0.3858
Epoch 2/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.6852 - accuracy: 0.4008 - val_loss: 1.7055 - val_accuracy: 0.3792
Epoch 3/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5963 - accuracy: 0.4413 - val_loss: 1.7401 - val_accuracy: 0.4072
Epoch 4/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5231 - accuracy: 0.4634 - val_loss: 1.5728 - val_accuracy: 0.4584
Epoch 5/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.4619 - accuracy: 0.4887 - val_loss: 1.5448 - val_accuracy: 0.4702
Epoch 6/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.4074 - accuracy: 0.5061 - val_loss: 1.5678 - val_accuracy: 0.4664
Epoch 7/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.3718 - accuracy: 0.5222 - val_loss: 1.5764 - val_accuracy: 0.4824
Epoch 8/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.3220 - accuracy: 0.5387 - val_loss: 1.4805 - val_accuracy: 0.4890
Epoch 9/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2908 - accuracy: 0.5487 - val_loss: 1.5521 - val_accuracy: 0.4638
Epoch 10/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.2537 - accuracy: 0.5607 - val_loss: 1.5281 - val_accuracy: 0.4924
Epoch 11/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2215 - accuracy: 0.5782 - val_loss: 1.5147 - val_accuracy: 0.5046
Epoch 12/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.1910 - accuracy: 0.5831 - val_loss: 1.5248 - val_accuracy: 0.5002
Epoch 13/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1659 - accuracy: 0.5982 - val_loss: 1.5620 - val_accuracy: 0.5066
Epoch 14/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1282 - accuracy: 0.6120 - val_loss: 1.5440 - val_accuracy: 0.5180
Epoch 15/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1127 - accuracy: 0.6133 - val_loss: 1.5782 - val_accuracy: 0.5146
Epoch 16/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0917 - accuracy: 0.6266 - val_loss: 1.6182 - val_accuracy: 0.5182
Epoch 17/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.0620 - accuracy: 0.6331 - val_loss: 1.6285 - val_accuracy: 0.5126
Epoch 18/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0433 - accuracy: 0.6413 - val_loss: 1.6299 - val_accuracy: 0.5158
Epoch 19/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0087 - accuracy: 0.6549 - val_loss: 1.7172 - val_accuracy: 0.5062
Epoch 20/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9950 - accuracy: 0.6571 - val_loss: 1.6524 - val_accuracy: 0.5098
Epoch 21/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9848 - accuracy: 0.6652 - val_loss: 1.7686 - val_accuracy: 0.5038
Epoch 22/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9597 - accuracy: 0.6744 - val_loss: 1.6177 - val_accuracy: 0.5084
Epoch 23/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9399 - accuracy: 0.6790 - val_loss: 1.7095 - val_accuracy: 0.5082
Epoch 24/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9148 - accuracy: 0.6884 - val_loss: 1.7160 - val_accuracy: 0.5150
Epoch 25/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9023 - accuracy: 0.6949 - val_loss: 1.7017 - val_accuracy: 0.5152
Epoch 26/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8732 - accuracy: 0.7031 - val_loss: 1.7274 - val_accuracy: 0.5088
Epoch 27/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.8542 - accuracy: 0.7091 - val_loss: 1.7648 - val_accuracy: 0.5166
Epoch 28/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8499 - accuracy: 0.7118 - val_loss: 1.7973 - val_accuracy: 0.5000
157/157 [==============================] - 0s 1ms/step - loss: 1.4805 - accuracy: 0.4890
###Markdown
The model reaches 48.9% accuracy on the validation set. That's very slightly better than without dropout (47.6%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get no accuracy improvement in this case (we're still at 48.9% accuracy). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(math.ceil(len(X_train_scaled) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/15
352/352 [==============================] - 3s 6ms/step - loss: 2.2298 - accuracy: 0.2349 - val_loss: 1.7841 - val_accuracy: 0.3834
Epoch 2/15
352/352 [==============================] - 2s 6ms/step - loss: 1.7928 - accuracy: 0.3689 - val_loss: 1.6806 - val_accuracy: 0.4086
Epoch 3/15
352/352 [==============================] - 2s 6ms/step - loss: 1.6475 - accuracy: 0.4190 - val_loss: 1.6378 - val_accuracy: 0.4350
Epoch 4/15
352/352 [==============================] - 2s 6ms/step - loss: 1.5428 - accuracy: 0.4543 - val_loss: 1.6266 - val_accuracy: 0.4390
Epoch 5/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4865 - accuracy: 0.4769 - val_loss: 1.6158 - val_accuracy: 0.4384
Epoch 6/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4339 - accuracy: 0.4866 - val_loss: 1.5850 - val_accuracy: 0.4412
Epoch 7/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4042 - accuracy: 0.5056 - val_loss: 1.6146 - val_accuracy: 0.4384
Epoch 8/15
352/352 [==============================] - 2s 6ms/step - loss: 1.3437 - accuracy: 0.5229 - val_loss: 1.5299 - val_accuracy: 0.4846
Epoch 9/15
352/352 [==============================] - 2s 5ms/step - loss: 1.2721 - accuracy: 0.5459 - val_loss: 1.5145 - val_accuracy: 0.4874
Epoch 10/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1942 - accuracy: 0.5698 - val_loss: 1.4958 - val_accuracy: 0.5040
Epoch 11/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1211 - accuracy: 0.6033 - val_loss: 1.5406 - val_accuracy: 0.4984
Epoch 12/15
352/352 [==============================] - 2s 6ms/step - loss: 1.0673 - accuracy: 0.6161 - val_loss: 1.5284 - val_accuracy: 0.5144
Epoch 13/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9927 - accuracy: 0.6435 - val_loss: 1.5449 - val_accuracy: 0.5140
Epoch 14/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9205 - accuracy: 0.6703 - val_loss: 1.5652 - val_accuracy: 0.5224
Epoch 15/15
352/352 [==============================] - 2s 6ms/step - loss: 0.8936 - accuracy: 0.6801 - val_loss: 1.5912 - val_accuracy: 0.5198
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 2ms/step - loss: 1.2819 - accuracy: 0.6229 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.7955 - accuracy: 0.7361 - val_loss: 0.7130 - val_accuracy: 0.7658
Epoch 3/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.6816 - accuracy: 0.7721 - val_loss: 0.6427 - val_accuracy: 0.7900
Epoch 4/10
1719/1719 [==============================] - 3s 1ms/step - loss: 0.6217 - accuracy: 0.7944 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5832 - accuracy: 0.8075 - val_loss: 0.5582 - val_accuracy: 0.8202
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5553 - accuracy: 0.8156 - val_loss: 0.5350 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5338 - accuracy: 0.8224 - val_loss: 0.5157 - val_accuracy: 0.8302
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5173 - accuracy: 0.8272 - val_loss: 0.5079 - val_accuracy: 0.8284
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5040 - accuracy: 0.8289 - val_loss: 0.4895 - val_accuracy: 0.8388
Epoch 10/10
1719/1719 [==============================] - 3s 1ms/step - loss: 0.4924 - accuracy: 0.8321 - val_loss: 0.4817 - val_accuracy: 0.8396
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 2ms/step - loss: 1.3461 - accuracy: 0.6209 - val_loss: 0.9255 - val_accuracy: 0.7186
Epoch 2/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.8197 - accuracy: 0.7356 - val_loss: 0.7305 - val_accuracy: 0.7630
Epoch 3/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.6966 - accuracy: 0.7693 - val_loss: 0.6565 - val_accuracy: 0.7878
Epoch 4/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.6331 - accuracy: 0.7909 - val_loss: 0.6003 - val_accuracy: 0.8048
Epoch 5/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5917 - accuracy: 0.8057 - val_loss: 0.5656 - val_accuracy: 0.8182
Epoch 6/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5618 - accuracy: 0.8136 - val_loss: 0.5406 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5390 - accuracy: 0.8205 - val_loss: 0.5196 - val_accuracy: 0.8312
Epoch 8/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5213 - accuracy: 0.8257 - val_loss: 0.5113 - val_accuracy: 0.8316
Epoch 9/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5070 - accuracy: 0.8288 - val_loss: 0.4916 - val_accuracy: 0.8378
Epoch 10/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.4945 - accuracy: 0.8314 - val_loss: 0.4826 - val_accuracy: 0.8398
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
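To make these constraints concrete, here is a minimal sketch (an illustration only, not the notebook's own model, which is built a few cells below) of a plain sequential stack that preserves self-normalization; note the `lecun_normal` initialization and the use of `AlphaDropout` rather than regular `Dropout`, since regular dropout would break the self-normalizing property:

```python
# Minimal self-normalizing stack (assumes standardized inputs; the 0.1 rate is just an example value)
model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
    keras.layers.AlphaDropout(rate=0.1),
    keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
    keras.layers.Dense(10, activation="softmax")
])
```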
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 19s 10ms/step - loss: 1.2485 - accuracy: 0.5315 - val_loss: 0.7891 - val_accuracy: 0.6920
Epoch 2/5
1719/1719 [==============================] - 16s 10ms/step - loss: 0.8135 - accuracy: 0.6932 - val_loss: 0.6455 - val_accuracy: 0.7620
Epoch 3/5
1719/1719 [==============================] - 16s 9ms/step - loss: 0.6817 - accuracy: 0.7455 - val_loss: 0.6369 - val_accuracy: 0.7738
Epoch 4/5
1719/1719 [==============================] - 17s 10ms/step - loss: 0.6043 - accuracy: 0.7715 - val_loss: 0.6268 - val_accuracy: 0.7616
Epoch 5/5
1719/1719 [==============================] - 16s 9ms/step - loss: 0.5795 - accuracy: 0.7840 - val_loss: 0.5398 - val_accuracy: 0.8016
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 16s 8ms/step - loss: 1.8090 - accuracy: 0.2591 - val_loss: 1.2585 - val_accuracy: 0.4358
Epoch 2/5
1719/1719 [==============================] - 13s 8ms/step - loss: 1.1955 - accuracy: 0.4889 - val_loss: 1.0872 - val_accuracy: 0.5286
Epoch 3/5
1719/1719 [==============================] - 13s 8ms/step - loss: 1.0016 - accuracy: 0.5831 - val_loss: 1.2788 - val_accuracy: 0.4246
Epoch 4/5
1719/1719 [==============================] - 13s 8ms/step - loss: 0.8550 - accuracy: 0.6584 - val_loss: 0.7731 - val_accuracy: 0.6716
Epoch 5/5
1719/1719 [==============================] - 13s 8ms/step - loss: 0.8133 - accuracy: 0.6843 - val_loss: 0.7896 - val_accuracy: 0.6926
###Markdown
Not great at all: we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
#bn1.updates #deprecated
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 5s 2ms/step - loss: 0.8750 - accuracy: 0.7124 - val_loss: 0.5525 - val_accuracy: 0.8230
Epoch 2/10
1719/1719 [==============================] - 4s 3ms/step - loss: 0.5753 - accuracy: 0.8029 - val_loss: 0.4724 - val_accuracy: 0.8472
Epoch 3/10
1719/1719 [==============================] - 4s 3ms/step - loss: 0.5189 - accuracy: 0.8205 - val_loss: 0.4375 - val_accuracy: 0.8550
Epoch 4/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4827 - accuracy: 0.8321 - val_loss: 0.4152 - val_accuracy: 0.8600
Epoch 5/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4565 - accuracy: 0.8407 - val_loss: 0.3997 - val_accuracy: 0.8638
Epoch 6/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4398 - accuracy: 0.8473 - val_loss: 0.3866 - val_accuracy: 0.8694
Epoch 7/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4242 - accuracy: 0.8511 - val_loss: 0.3763 - val_accuracy: 0.8706
Epoch 8/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4144 - accuracy: 0.8539 - val_loss: 0.3712 - val_accuracy: 0.8732
Epoch 9/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4023 - accuracy: 0.8582 - val_loss: 0.3631 - val_accuracy: 0.8752
Epoch 10/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.3914 - accuracy: 0.8623 - val_loss: 0.3573 - val_accuracy: 0.8762
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has some as well; it would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 5s 2ms/step - loss: 1.0317 - accuracy: 0.6757 - val_loss: 0.6767 - val_accuracy: 0.7816
Epoch 2/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.6791 - accuracy: 0.7793 - val_loss: 0.5566 - val_accuracy: 0.8182
Epoch 3/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.5960 - accuracy: 0.8037 - val_loss: 0.5007 - val_accuracy: 0.8360
Epoch 4/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.5447 - accuracy: 0.8193 - val_loss: 0.4666 - val_accuracy: 0.8448
Epoch 5/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.5109 - accuracy: 0.8279 - val_loss: 0.4434 - val_accuracy: 0.8536
Epoch 6/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4898 - accuracy: 0.8337 - val_loss: 0.4263 - val_accuracy: 0.8550
Epoch 7/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4712 - accuracy: 0.8395 - val_loss: 0.4130 - val_accuracy: 0.8572
Epoch 8/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4560 - accuracy: 0.8440 - val_loss: 0.4035 - val_accuracy: 0.8612
Epoch 9/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4441 - accuracy: 0.8474 - val_loss: 0.3943 - val_accuracy: 0.8640
Epoch 10/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4333 - accuracy: 0.8505 - val_loss: 0.3874 - val_accuracy: 0.8660
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
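# A sketch of using a clipped optimizer: it is passed to compile() like any other
# (the loss and metrics here simply match the ones used throughout this notebook)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])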
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
###Output
_____no_output_____
###Markdown
Note that `model_B_on_A` and `model_A` actually share layers now, so when we train one, it will update both models. If we want to avoid that, we need to build `model_B_on_A` on top of a *clone* of `model_A`:
###Code
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
model_B_on_A = keras.models.Sequential(model_A_clone.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 0s 31ms/step - loss: 0.2653 - accuracy: 0.9400 - val_loss: 0.2794 - val_accuracy: 0.9290
Epoch 2/4
7/7 [==============================] - 0s 10ms/step - loss: 0.2556 - accuracy: 0.9400 - val_loss: 0.2698 - val_accuracy: 0.9310
Epoch 3/4
7/7 [==============================] - 0s 10ms/step - loss: 0.2462 - accuracy: 0.9400 - val_loss: 0.2610 - val_accuracy: 0.9341
Epoch 4/4
7/7 [==============================] - 0s 9ms/step - loss: 0.2377 - accuracy: 0.9400 - val_loss: 0.2528 - val_accuracy: 0.9361
Epoch 1/16
7/7 [==============================] - 1s 30ms/step - loss: 0.2128 - accuracy: 0.9500 - val_loss: 0.2048 - val_accuracy: 0.9635
Epoch 2/16
7/7 [==============================] - 0s 11ms/step - loss: 0.1703 - accuracy: 0.9550 - val_loss: 0.1723 - val_accuracy: 0.9716
Epoch 3/16
7/7 [==============================] - 0s 11ms/step - loss: 0.1412 - accuracy: 0.9650 - val_loss: 0.1496 - val_accuracy: 0.9817
Epoch 4/16
7/7 [==============================] - 0s 10ms/step - loss: 0.1201 - accuracy: 0.9800 - val_loss: 0.1329 - val_accuracy: 0.9828
Epoch 5/16
7/7 [==============================] - 0s 11ms/step - loss: 0.1049 - accuracy: 0.9900 - val_loss: 0.1204 - val_accuracy: 0.9838
Epoch 6/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0932 - accuracy: 0.9950 - val_loss: 0.1105 - val_accuracy: 0.9858
Epoch 7/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0840 - accuracy: 0.9950 - val_loss: 0.1023 - val_accuracy: 0.9858
Epoch 8/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0764 - accuracy: 0.9950 - val_loss: 0.0955 - val_accuracy: 0.9868
Epoch 9/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0700 - accuracy: 0.9950 - val_loss: 0.0894 - val_accuracy: 0.9868
Epoch 10/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0643 - accuracy: 0.9950 - val_loss: 0.0846 - val_accuracy: 0.9878
Epoch 11/16
7/7 [==============================] - 0s 12ms/step - loss: 0.0598 - accuracy: 0.9950 - val_loss: 0.0802 - val_accuracy: 0.9888
Epoch 12/16
7/7 [==============================] - 0s 13ms/step - loss: 0.0555 - accuracy: 1.0000 - val_loss: 0.0764 - val_accuracy: 0.9878
Epoch 13/16
7/7 [==============================] - 0s 16ms/step - loss: 0.0518 - accuracy: 1.0000 - val_loss: 0.0730 - val_accuracy: 0.9878
Epoch 14/16
7/7 [==============================] - 0s 19ms/step - loss: 0.0486 - accuracy: 1.0000 - val_loss: 0.0702 - val_accuracy: 0.9878
Epoch 15/16
7/7 [==============================] - 0s 24ms/step - loss: 0.0460 - accuracy: 1.0000 - val_loss: 0.0677 - val_accuracy: 0.9878
Epoch 16/16
7/7 [==============================] - 0s 17ms/step - loss: 0.0436 - accuracy: 1.0000 - val_loss: 0.0653 - val_accuracy: 0.9878
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 2ms/step - loss: 0.0562 - accuracy: 0.9940
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4.9!
###Code
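# ratio of error rates: (100 - model_B's test accuracy in %) / (100 - model_B_on_A's test accuracy in %)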
(100 - 97.05) / (100 - 99.40)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(learning_rate=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c``` * Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
import math
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = math.ceil(len(X_train) / batch_size)
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
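# This variant multiplies the optimizer's current learning rate by a constant factor each
# epoch, so it relies on the optimizer's initial learning rate rather than a hard-coded lr0;
# it can be used with the same callback as before, e.g.:
# lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)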
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.learning_rate)
K.set_value(self.model.optimizer.learning_rate, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.learning_rate)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(learning_rate=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4894 - accuracy: 0.8277 - val_loss: 0.4095 - val_accuracy: 0.8594
Epoch 2/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.3820 - accuracy: 0.8651 - val_loss: 0.3742 - val_accuracy: 0.8696
Epoch 3/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.3487 - accuracy: 0.8769 - val_loss: 0.3735 - val_accuracy: 0.8680
Epoch 4/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.3264 - accuracy: 0.8836 - val_loss: 0.3497 - val_accuracy: 0.8796
Epoch 5/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.3104 - accuracy: 0.8898 - val_loss: 0.3433 - val_accuracy: 0.8796
Epoch 6/25
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2958 - accuracy: 0.8951 - val_loss: 0.3417 - val_accuracy: 0.8816
Epoch 7/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.2854 - accuracy: 0.8986 - val_loss: 0.3357 - val_accuracy: 0.8812
Epoch 8/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.2760 - accuracy: 0.9018 - val_loss: 0.3367 - val_accuracy: 0.8824
Epoch 9/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.2677 - accuracy: 0.9055 - val_loss: 0.3265 - val_accuracy: 0.8864
Epoch 10/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.2608 - accuracy: 0.9068 - val_loss: 0.3240 - val_accuracy: 0.8854
Epoch 11/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.2551 - accuracy: 0.9087 - val_loss: 0.3253 - val_accuracy: 0.8866
Epoch 12/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.2497 - accuracy: 0.9128 - val_loss: 0.3300 - val_accuracy: 0.8804
Epoch 13/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.2449 - accuracy: 0.9138 - val_loss: 0.3219 - val_accuracy: 0.8862
Epoch 14/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.2415 - accuracy: 0.9147 - val_loss: 0.3222 - val_accuracy: 0.8860
Epoch 15/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.2375 - accuracy: 0.9168 - val_loss: 0.3208 - val_accuracy: 0.8878
Epoch 16/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.2343 - accuracy: 0.9179 - val_loss: 0.3184 - val_accuracy: 0.8886
Epoch 17/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.2316 - accuracy: 0.9187 - val_loss: 0.3197 - val_accuracy: 0.8898
Epoch 18/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.2291 - accuracy: 0.9197 - val_loss: 0.3169 - val_accuracy: 0.8908
Epoch 19/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.2269 - accuracy: 0.9207 - val_loss: 0.3197 - val_accuracy: 0.8882
Epoch 20/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.2250 - accuracy: 0.9217 - val_loss: 0.3170 - val_accuracy: 0.8896
Epoch 21/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.2229 - accuracy: 0.9223 - val_loss: 0.3180 - val_accuracy: 0.8910
Epoch 22/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.2216 - accuracy: 0.9223 - val_loss: 0.3164 - val_accuracy: 0.8910
Epoch 23/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.2201 - accuracy: 0.9233 - val_loss: 0.3172 - val_accuracy: 0.8896
Epoch 24/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.2188 - accuracy: 0.9241 - val_loss: 0.3167 - val_accuracy: 0.8904
Epoch 25/25
1719/1719 [==============================] - 4s 2ms/step - loss: 0.2179 - accuracy: 0.9243 - val_loss: 0.3166 - val_accuracy: 0.8904
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
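# A sketch of how this schedule might be used (same pattern as the ExponentialDecay example above):
optimizer = keras.optimizers.SGD(learning_rate)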
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.learning_rate))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.learning_rate, self.model.optimizer.learning_rate * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = math.ceil(len(X) / batch_size) * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.learning_rate)
K.set_value(model.optimizer.learning_rate, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.learning_rate, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
###Output
_____no_output_____
###Markdown
**Warning**: In the `on_batch_end()` method, `logs["loss"]` used to contain the batch loss, but in TensorFlow 2.2.0 it was replaced with the mean loss (since the start of the epoch). This explains why the graph below is much smoother than in the book (if you are using TF 2.2 or above). It also means that there is a lag between the moment the batch loss starts exploding and the moment the explosion becomes clear in the graph. So you should choose a slightly smaller learning rate than you would have chosen with the "noisy" graph. Alternatively, you can tweak the `ExponentialLearningRate` callback above so it computes the batch loss (based on the current mean loss and the previous mean loss):

```python
class ExponentialLearningRate(keras.callbacks.Callback):
    def __init__(self, factor):
        self.factor = factor
        self.rates = []
        self.losses = []
    def on_epoch_begin(self, epoch, logs=None):
        self.prev_loss = 0
    def on_batch_end(self, batch, logs=None):
        batch_loss = logs["loss"] * (batch + 1) - self.prev_loss * batch
        self.prev_loss = logs["loss"]
        self.rates.append(K.get_value(self.model.optimizer.learning_rate))
        self.losses.append(batch_loss)
        K.set_value(self.model.optimizer.learning_rate, self.model.optimizer.learning_rate * self.factor)
```
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.learning_rate, rate)
n_epochs = 25
onecycle = OneCycleScheduler(math.ceil(len(X_train) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 2s 4ms/step - loss: 0.6572 - accuracy: 0.7740 - val_loss: 0.4872 - val_accuracy: 0.8336
Epoch 2/25
430/430 [==============================] - 2s 4ms/step - loss: 0.4581 - accuracy: 0.8395 - val_loss: 0.4275 - val_accuracy: 0.8522
Epoch 3/25
430/430 [==============================] - 2s 4ms/step - loss: 0.4122 - accuracy: 0.8547 - val_loss: 0.4115 - val_accuracy: 0.8582
Epoch 4/25
430/430 [==============================] - 2s 4ms/step - loss: 0.3837 - accuracy: 0.8640 - val_loss: 0.3869 - val_accuracy: 0.8686
Epoch 5/25
430/430 [==============================] - 2s 4ms/step - loss: 0.3639 - accuracy: 0.8718 - val_loss: 0.3766 - val_accuracy: 0.8682
Epoch 6/25
430/430 [==============================] - 2s 4ms/step - loss: 0.3457 - accuracy: 0.8774 - val_loss: 0.3745 - val_accuracy: 0.8706
Epoch 7/25
430/430 [==============================] - 2s 4ms/step - loss: 0.3330 - accuracy: 0.8811 - val_loss: 0.3633 - val_accuracy: 0.8704
Epoch 8/25
430/430 [==============================] - 2s 4ms/step - loss: 0.3184 - accuracy: 0.8862 - val_loss: 0.3954 - val_accuracy: 0.8598
Epoch 9/25
430/430 [==============================] - 2s 4ms/step - loss: 0.3065 - accuracy: 0.8892 - val_loss: 0.3488 - val_accuracy: 0.8768
Epoch 10/25
430/430 [==============================] - 2s 4ms/step - loss: 0.2944 - accuracy: 0.8926 - val_loss: 0.3399 - val_accuracy: 0.8796
Epoch 11/25
430/430 [==============================] - 2s 4ms/step - loss: 0.2838 - accuracy: 0.8961 - val_loss: 0.3452 - val_accuracy: 0.8796
Epoch 12/25
430/430 [==============================] - 2s 4ms/step - loss: 0.2708 - accuracy: 0.9025 - val_loss: 0.3661 - val_accuracy: 0.8688
Epoch 13/25
430/430 [==============================] - 2s 4ms/step - loss: 0.2537 - accuracy: 0.9082 - val_loss: 0.3356 - val_accuracy: 0.8836
Epoch 14/25
430/430 [==============================] - 2s 4ms/step - loss: 0.2405 - accuracy: 0.9134 - val_loss: 0.3463 - val_accuracy: 0.8806
Epoch 15/25
430/430 [==============================] - 2s 4ms/step - loss: 0.2280 - accuracy: 0.9183 - val_loss: 0.3259 - val_accuracy: 0.8848
Epoch 16/25
430/430 [==============================] - 2s 4ms/step - loss: 0.2159 - accuracy: 0.9232 - val_loss: 0.3295 - val_accuracy: 0.8844
Epoch 17/25
430/430 [==============================] - 2s 4ms/step - loss: 0.2062 - accuracy: 0.9266 - val_loss: 0.3355 - val_accuracy: 0.8866
Epoch 18/25
430/430 [==============================] - 2s 4ms/step - loss: 0.1978 - accuracy: 0.9303 - val_loss: 0.3238 - val_accuracy: 0.8908
Epoch 19/25
430/430 [==============================] - 2s 4ms/step - loss: 0.1892 - accuracy: 0.9338 - val_loss: 0.3233 - val_accuracy: 0.8904
Epoch 20/25
430/430 [==============================] - 2s 4ms/step - loss: 0.1821 - accuracy: 0.9366 - val_loss: 0.3226 - val_accuracy: 0.8926
Epoch 21/25
430/430 [==============================] - 2s 4ms/step - loss: 0.1752 - accuracy: 0.9398 - val_loss: 0.3220 - val_accuracy: 0.8916
Epoch 22/25
430/430 [==============================] - 2s 4ms/step - loss: 0.1700 - accuracy: 0.9421 - val_loss: 0.3184 - val_accuracy: 0.8954
Epoch 23/25
430/430 [==============================] - 2s 4ms/step - loss: 0.1654 - accuracy: 0.9438 - val_loss: 0.3189 - val_accuracy: 0.8942
Epoch 24/25
430/430 [==============================] - 2s 4ms/step - loss: 0.1626 - accuracy: 0.9453 - val_loss: 0.3179 - val_accuracy: 0.8938
Epoch 25/25
430/430 [==============================] - 2s 4ms/step - loss: 0.1609 - accuracy: 0.9463 - val_loss: 0.3172 - val_accuracy: 0.8940
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 9s 5ms/step - loss: 1.6313 - accuracy: 0.8113 - val_loss: 0.7218 - val_accuracy: 0.8310
Epoch 2/2
1719/1719 [==============================] - 7s 4ms/step - loss: 0.7187 - accuracy: 0.8273 - val_loss: 0.6826 - val_accuracy: 0.8382
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5838 - accuracy: 0.7997 - val_loss: 0.3730 - val_accuracy: 0.8644
Epoch 2/2
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4209 - accuracy: 0.8443 - val_loss: 0.3397 - val_accuracy: 0.8716
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4211 - accuracy: 0.8438
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
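# A possible helper (a sketch, mirroring the mc_dropout_predict_* functions from the
# exercise section above): average many stochastic forward passes, then take the argmax
def mc_predict_classes(mc_model, X, n_samples=100):
    Y_probas = np.stack([mc_model.predict(X) for _ in range(n_samples)])
    return np.argmax(Y_probas.mean(axis=0), axis=1)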
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 7s 4ms/step - loss: 0.4752 - accuracy: 0.8328 - val_loss: 0.3664 - val_accuracy: 0.8644
Epoch 2/2
1719/1719 [==============================] - 7s 4ms/step - loss: 0.3545 - accuracy: 0.8718 - val_loss: 0.3661 - val_accuracy: 0.8688
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6314 - accuracy: 0.5054 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.8416 - accuracy: 0.7247 - val_loss: 0.7130 - val_accuracy: 0.7656
Epoch 3/10
1719/1719 [==============================] - 2s 879us/step - loss: 0.7053 - accuracy: 0.7637 - val_loss: 0.6427 - val_accuracy: 0.7898
Epoch 4/10
1719/1719 [==============================] - 2s 883us/step - loss: 0.6325 - accuracy: 0.7908 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 2s 887us/step - loss: 0.5992 - accuracy: 0.8021 - val_loss: 0.5582 - val_accuracy: 0.8200
Epoch 6/10
1719/1719 [==============================] - 2s 881us/step - loss: 0.5624 - accuracy: 0.8142 - val_loss: 0.5350 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.5379 - accuracy: 0.8217 - val_loss: 0.5157 - val_accuracy: 0.8304
Epoch 8/10
1719/1719 [==============================] - 2s 895us/step - loss: 0.5152 - accuracy: 0.8295 - val_loss: 0.5078 - val_accuracy: 0.8284
Epoch 9/10
1719/1719 [==============================] - 2s 911us/step - loss: 0.5100 - accuracy: 0.8268 - val_loss: 0.4895 - val_accuracy: 0.8390
Epoch 10/10
1719/1719 [==============================] - 2s 897us/step - loss: 0.4918 - accuracy: 0.8340 - val_loss: 0.4817 - val_accuracy: 0.8396
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6969 - accuracy: 0.4974 - val_loss: 0.9255 - val_accuracy: 0.7186
Epoch 2/10
1719/1719 [==============================] - 2s 990us/step - loss: 0.8706 - accuracy: 0.7247 - val_loss: 0.7305 - val_accuracy: 0.7630
Epoch 3/10
1719/1719 [==============================] - 2s 980us/step - loss: 0.7211 - accuracy: 0.7621 - val_loss: 0.6564 - val_accuracy: 0.7882
Epoch 4/10
1719/1719 [==============================] - 2s 985us/step - loss: 0.6447 - accuracy: 0.7879 - val_loss: 0.6003 - val_accuracy: 0.8048
Epoch 5/10
1719/1719 [==============================] - 2s 967us/step - loss: 0.6077 - accuracy: 0.8004 - val_loss: 0.5656 - val_accuracy: 0.8182
Epoch 6/10
1719/1719 [==============================] - 2s 984us/step - loss: 0.5692 - accuracy: 0.8118 - val_loss: 0.5406 - val_accuracy: 0.8236
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5428 - accuracy: 0.8194 - val_loss: 0.5196 - val_accuracy: 0.8314
Epoch 8/10
1719/1719 [==============================] - 2s 983us/step - loss: 0.5193 - accuracy: 0.8284 - val_loss: 0.5113 - val_accuracy: 0.8316
Epoch 9/10
1719/1719 [==============================] - 2s 992us/step - loss: 0.5128 - accuracy: 0.8272 - val_loss: 0.4916 - val_accuracy: 0.8378
Epoch 10/10
1719/1719 [==============================] - 2s 988us/step - loss: 0.4941 - accuracy: 0.8314 - val_loss: 0.4826 - val_accuracy: 0.8398
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 12s 6ms/step - loss: 1.3556 - accuracy: 0.4808 - val_loss: 0.7711 - val_accuracy: 0.6858
Epoch 2/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7537 - accuracy: 0.7235 - val_loss: 0.7534 - val_accuracy: 0.7384
Epoch 3/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7451 - accuracy: 0.7357 - val_loss: 0.5943 - val_accuracy: 0.7834
Epoch 4/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5699 - accuracy: 0.7906 - val_loss: 0.5434 - val_accuracy: 0.8066
Epoch 5/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5569 - accuracy: 0.8051 - val_loss: 0.4907 - val_accuracy: 0.8218
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 11s 5ms/step - loss: 2.0460 - accuracy: 0.1919 - val_loss: 1.5971 - val_accuracy: 0.3048
Epoch 2/5
1719/1719 [==============================] - 8s 5ms/step - loss: 1.2654 - accuracy: 0.4591 - val_loss: 0.9156 - val_accuracy: 0.6372
Epoch 3/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.9312 - accuracy: 0.6169 - val_loss: 0.8928 - val_accuracy: 0.6246
Epoch 4/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.8188 - accuracy: 0.6710 - val_loss: 0.6914 - val_accuracy: 0.7396
Epoch 5/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.7288 - accuracy: 0.7152 - val_loss: 0.6638 - val_accuracy: 0.7380
###Markdown
Not great at all: we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
#bn1.updates #deprecated
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.2287 - accuracy: 0.5993 - val_loss: 0.5526 - val_accuracy: 0.8230
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5996 - accuracy: 0.7959 - val_loss: 0.4725 - val_accuracy: 0.8468
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5312 - accuracy: 0.8168 - val_loss: 0.4375 - val_accuracy: 0.8558
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4884 - accuracy: 0.8294 - val_loss: 0.4153 - val_accuracy: 0.8596
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4717 - accuracy: 0.8343 - val_loss: 0.3997 - val_accuracy: 0.8640
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4420 - accuracy: 0.8461 - val_loss: 0.3867 - val_accuracy: 0.8694
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4285 - accuracy: 0.8496 - val_loss: 0.3763 - val_accuracy: 0.8710
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4086 - accuracy: 0.8552 - val_loss: 0.3711 - val_accuracy: 0.8740
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4079 - accuracy: 0.8566 - val_loss: 0.3631 - val_accuracy: 0.8752
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3903 - accuracy: 0.8617 - val_loss: 0.3573 - val_accuracy: 0.8750
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer adds its own offset parameter; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.3677 - accuracy: 0.5604 - val_loss: 0.6767 - val_accuracy: 0.7812
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.7136 - accuracy: 0.7702 - val_loss: 0.5566 - val_accuracy: 0.8184
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.6123 - accuracy: 0.7990 - val_loss: 0.5007 - val_accuracy: 0.8360
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5547 - accuracy: 0.8148 - val_loss: 0.4666 - val_accuracy: 0.8448
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5255 - accuracy: 0.8230 - val_loss: 0.4434 - val_accuracy: 0.8534
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4947 - accuracy: 0.8328 - val_loss: 0.4263 - val_accuracy: 0.8550
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4736 - accuracy: 0.8385 - val_loss: 0.4130 - val_accuracy: 0.8566
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4550 - accuracy: 0.8446 - val_loss: 0.4035 - val_accuracy: 0.8612
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4495 - accuracy: 0.8440 - val_loss: 0.3943 - val_accuracy: 0.8638
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4333 - accuracy: 0.8494 - val_loss: 0.3875 - val_accuracy: 0.8660
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
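###Markdown
For instance (a quick sketch, not in the original text), one of these clipped optimizers can simply be passed to `compile()` on any model; the gradients are then clipped before each weight update. The tiny `clipped_model` below is just a placeholder:
###Code
clipped_model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(10, activation="softmax")
])
clipped_model.compile(loss="sparse_categorical_crossentropy",
                      optimizer=keras.optimizers.SGD(clipnorm=1.0),
                      metrics=["accuracy"])
###Output
_____no_output_____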
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
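# Note: model_B_on_A shares its layers with model_A, so training model_B_on_A
# below will also modify model_A's weights; the clone keeps an untouched copy
# of model_A in case the original is needed later.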
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 1s 83ms/step - loss: 0.6155 - accuracy: 0.6184 - val_loss: 0.5843 - val_accuracy: 0.6329
Epoch 2/4
7/7 [==============================] - 0s 9ms/step - loss: 0.5550 - accuracy: 0.6638 - val_loss: 0.5467 - val_accuracy: 0.6805
Epoch 3/4
7/7 [==============================] - 0s 8ms/step - loss: 0.4897 - accuracy: 0.7482 - val_loss: 0.5146 - val_accuracy: 0.7089
Epoch 4/4
7/7 [==============================] - 0s 8ms/step - loss: 0.4899 - accuracy: 0.7405 - val_loss: 0.4859 - val_accuracy: 0.7323
Epoch 1/16
7/7 [==============================] - 0s 28ms/step - loss: 0.4380 - accuracy: 0.7774 - val_loss: 0.3460 - val_accuracy: 0.8661
Epoch 2/16
7/7 [==============================] - 0s 9ms/step - loss: 0.2971 - accuracy: 0.9143 - val_loss: 0.2603 - val_accuracy: 0.9310
Epoch 3/16
7/7 [==============================] - 0s 9ms/step - loss: 0.2034 - accuracy: 0.9777 - val_loss: 0.2110 - val_accuracy: 0.9554
Epoch 4/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1754 - accuracy: 0.9719 - val_loss: 0.1790 - val_accuracy: 0.9696
Epoch 5/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1348 - accuracy: 0.9809 - val_loss: 0.1561 - val_accuracy: 0.9757
Epoch 6/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1172 - accuracy: 0.9973 - val_loss: 0.1392 - val_accuracy: 0.9797
Epoch 7/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1137 - accuracy: 0.9931 - val_loss: 0.1266 - val_accuracy: 0.9838
Epoch 8/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1000 - accuracy: 0.9931 - val_loss: 0.1163 - val_accuracy: 0.9858
Epoch 9/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0834 - accuracy: 1.0000 - val_loss: 0.1065 - val_accuracy: 0.9888
Epoch 10/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0775 - accuracy: 1.0000 - val_loss: 0.0999 - val_accuracy: 0.9899
Epoch 11/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0689 - accuracy: 1.0000 - val_loss: 0.0939 - val_accuracy: 0.9899
Epoch 12/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0888 - val_accuracy: 0.9899
Epoch 13/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0565 - accuracy: 1.0000 - val_loss: 0.0839 - val_accuracy: 0.9899
Epoch 14/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0494 - accuracy: 1.0000 - val_loss: 0.0802 - val_accuracy: 0.9899
Epoch 15/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0544 - accuracy: 1.0000 - val_loss: 0.0768 - val_accuracy: 0.9899
Epoch 16/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0472 - accuracy: 1.0000 - val_loss: 0.0738 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 705us/step - loss: 0.0682 - accuracy: 0.9935
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4.5!
###Code
(100 - 97.05) / (100 - 99.35)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
import math
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = math.ceil(len(X_train) / batch_size)
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
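###Markdown
This two-argument form plugs into `keras.callbacks.LearningRateScheduler` just like the one-argument version: in recent versions of tf.keras the callback passes the optimizer's current learning rate as the second argument (a small sketch, not part of the original notebook):
###Code
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
###Output
_____no_output_____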
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
        # Note: the `batch` argument is reset at each epoch
        lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
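    # np.argmax(boundaries > epoch) returns the index of the first boundary
    # strictly greater than the current epoch, so subtracting 1 picks the value
    # for the segment the epoch falls in; once every boundary has been passed,
    # argmax returns 0 and index -1 wraps around to the last value.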
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
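# ExponentialDecay's positional arguments are (initial_learning_rate, decay_steps,
# decay_rate): the learning rate is multiplied by 0.1 every s steps, smoothly,
# since staircase defaults to False.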
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5995 - accuracy: 0.7923 - val_loss: 0.4095 - val_accuracy: 0.8606
Epoch 2/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3890 - accuracy: 0.8613 - val_loss: 0.3738 - val_accuracy: 0.8692
Epoch 3/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3530 - accuracy: 0.8772 - val_loss: 0.3735 - val_accuracy: 0.8692
Epoch 4/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3296 - accuracy: 0.8813 - val_loss: 0.3494 - val_accuracy: 0.8798
Epoch 5/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3178 - accuracy: 0.8867 - val_loss: 0.3430 - val_accuracy: 0.8794
Epoch 6/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2930 - accuracy: 0.8951 - val_loss: 0.3414 - val_accuracy: 0.8826
Epoch 7/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2854 - accuracy: 0.8985 - val_loss: 0.3354 - val_accuracy: 0.8810
Epoch 8/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9039 - val_loss: 0.3364 - val_accuracy: 0.8824
Epoch 9/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9047 - val_loss: 0.3265 - val_accuracy: 0.8846
Epoch 10/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2570 - accuracy: 0.9084 - val_loss: 0.3238 - val_accuracy: 0.8854
Epoch 11/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2502 - accuracy: 0.9117 - val_loss: 0.3250 - val_accuracy: 0.8862
Epoch 12/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2453 - accuracy: 0.9145 - val_loss: 0.3299 - val_accuracy: 0.8830
Epoch 13/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2408 - accuracy: 0.9154 - val_loss: 0.3219 - val_accuracy: 0.8870
Epoch 14/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2380 - accuracy: 0.9154 - val_loss: 0.3221 - val_accuracy: 0.8860
Epoch 15/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2378 - accuracy: 0.9166 - val_loss: 0.3208 - val_accuracy: 0.8864
Epoch 16/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2318 - accuracy: 0.9191 - val_loss: 0.3184 - val_accuracy: 0.8892
Epoch 17/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2266 - accuracy: 0.9212 - val_loss: 0.3197 - val_accuracy: 0.8906
Epoch 18/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2284 - accuracy: 0.9185 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 19/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2286 - accuracy: 0.9205 - val_loss: 0.3197 - val_accuracy: 0.8884
Epoch 20/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2288 - accuracy: 0.9211 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 21/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2265 - accuracy: 0.9212 - val_loss: 0.3179 - val_accuracy: 0.8904
Epoch 22/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2258 - accuracy: 0.9205 - val_loss: 0.3163 - val_accuracy: 0.8914
Epoch 23/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9226 - val_loss: 0.3170 - val_accuracy: 0.8904
Epoch 24/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2182 - accuracy: 0.9244 - val_loss: 0.3165 - val_accuracy: 0.8898
Epoch 25/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9229 - val_loss: 0.3164 - val_accuracy: 0.8904
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
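###Markdown
Like the `ExponentialDecay` schedule above, this schedule object is passed straight to the optimizer's constructor (a sketch following the same pattern as the previous cell):
###Code
optimizer = keras.optimizers.SGD(learning_rate)
###Output
_____no_output_____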
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = math.ceil(len(X) / batch_size) * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
###Output
_____no_output_____
###Markdown
**Warning**: In the `on_batch_end()` method, `logs["loss"]` used to contain the batch loss, but in TensorFlow 2.2.0 it was replaced with the mean loss (since the start of the epoch). This explains why the graph below is much smoother than in the book (if you are using TF 2.2 or above). It also means that there is a lag between the moment the batch loss starts exploding and the moment the explosion becomes clear in the graph. So you should choose a slightly smaller learning rate than you would have chosen with the "noisy" graph. Alternatively, you can tweak the `ExponentialLearningRate` callback above so it computes the batch loss (based on the current mean loss and the previous mean loss):```pythonclass ExponentialLearningRate(keras.callbacks.Callback): def __init__(self, factor): self.factor = factor self.rates = [] self.losses = [] def on_epoch_begin(self, epoch, logs=None): self.prev_loss = 0 def on_batch_end(self, batch, logs=None): batch_loss = logs["loss"] * (batch + 1) - self.prev_loss * batch self.prev_loss = logs["loss"] self.rates.append(K.get_value(self.model.optimizer.lr)) self.losses.append(batch_loss) K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)```
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
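    # Schedule (as implemented in on_batch_begin below): the learning rate climbs
    # linearly from start_rate to max_rate during the first half of the cycle
    # (excluding the final `last_iterations` steps), descends back to start_rate
    # during the second half, then drops linearly to last_rate at the very end.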
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(math.ceil(len(X_train) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 1s 2ms/step - loss: 0.6572 - accuracy: 0.7740 - val_loss: 0.4872 - val_accuracy: 0.8338
Epoch 2/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4580 - accuracy: 0.8397 - val_loss: 0.4274 - val_accuracy: 0.8520
Epoch 3/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4121 - accuracy: 0.8545 - val_loss: 0.4116 - val_accuracy: 0.8588
Epoch 4/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3837 - accuracy: 0.8642 - val_loss: 0.3868 - val_accuracy: 0.8688
Epoch 5/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3639 - accuracy: 0.8719 - val_loss: 0.3766 - val_accuracy: 0.8688
Epoch 6/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3456 - accuracy: 0.8775 - val_loss: 0.3739 - val_accuracy: 0.8706
Epoch 7/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3330 - accuracy: 0.8811 - val_loss: 0.3635 - val_accuracy: 0.8708
Epoch 8/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3184 - accuracy: 0.8861 - val_loss: 0.3959 - val_accuracy: 0.8610
Epoch 9/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3065 - accuracy: 0.8890 - val_loss: 0.3475 - val_accuracy: 0.8770
Epoch 10/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2943 - accuracy: 0.8927 - val_loss: 0.3392 - val_accuracy: 0.8806
Epoch 11/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2838 - accuracy: 0.8963 - val_loss: 0.3467 - val_accuracy: 0.8800
Epoch 12/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2707 - accuracy: 0.9024 - val_loss: 0.3646 - val_accuracy: 0.8696
Epoch 13/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2536 - accuracy: 0.9079 - val_loss: 0.3350 - val_accuracy: 0.8842
Epoch 14/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2405 - accuracy: 0.9135 - val_loss: 0.3465 - val_accuracy: 0.8794
Epoch 15/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2279 - accuracy: 0.9185 - val_loss: 0.3257 - val_accuracy: 0.8830
Epoch 16/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2159 - accuracy: 0.9232 - val_loss: 0.3294 - val_accuracy: 0.8824
Epoch 17/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2062 - accuracy: 0.9263 - val_loss: 0.3333 - val_accuracy: 0.8882
Epoch 18/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1978 - accuracy: 0.9301 - val_loss: 0.3235 - val_accuracy: 0.8898
Epoch 19/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1892 - accuracy: 0.9337 - val_loss: 0.3233 - val_accuracy: 0.8906
Epoch 20/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1821 - accuracy: 0.9365 - val_loss: 0.3224 - val_accuracy: 0.8928
Epoch 21/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1752 - accuracy: 0.9400 - val_loss: 0.3220 - val_accuracy: 0.8908
Epoch 22/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1700 - accuracy: 0.9416 - val_loss: 0.3180 - val_accuracy: 0.8962
Epoch 23/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1655 - accuracy: 0.9438 - val_loss: 0.3187 - val_accuracy: 0.8940
Epoch 24/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1627 - accuracy: 0.9454 - val_loss: 0.3177 - val_accuracy: 0.8932
Epoch 25/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1610 - accuracy: 0.9462 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 3.2911 - accuracy: 0.7924 - val_loss: 0.7218 - val_accuracy: 0.8310
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.7282 - accuracy: 0.8245 - val_loss: 0.6826 - val_accuracy: 0.8382
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 0.7611 - accuracy: 0.7576 - val_loss: 0.3730 - val_accuracy: 0.8644
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4306 - accuracy: 0.8401 - val_loss: 0.3395 - val_accuracy: 0.8722
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4225 - accuracy: 0.8432
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
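# Calling the model with training=True keeps the AlphaDropout layers active at
# inference time, so each of the 100 forward passes yields a different stochastic
# prediction; averaging them below gives the MC Dropout estimate.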
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5763 - accuracy: 0.8020 - val_loss: 0.3674 - val_accuracy: 0.8674
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3545 - accuracy: 0.8709 - val_loss: 0.3714 - val_accuracy: 0.8662
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4960 - accuracy: 0.4762
###Markdown
The model with the lowest validation loss gets about 47.6% accuracy on the validation set. It took 27 epochs to reach the lowest validation loss, with roughly 8 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 19s 9ms/step - loss: 1.9765 - accuracy: 0.2968 - val_loss: 1.6602 - val_accuracy: 0.4042
Epoch 2/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6787 - accuracy: 0.4056 - val_loss: 1.5887 - val_accuracy: 0.4304
Epoch 3/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6097 - accuracy: 0.4274 - val_loss: 1.5781 - val_accuracy: 0.4326
Epoch 4/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5574 - accuracy: 0.4486 - val_loss: 1.5064 - val_accuracy: 0.4676
Epoch 5/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5075 - accuracy: 0.4642 - val_loss: 1.4412 - val_accuracy: 0.4844
Epoch 6/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4664 - accuracy: 0.4787 - val_loss: 1.4179 - val_accuracy: 0.4984
Epoch 7/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4334 - accuracy: 0.4932 - val_loss: 1.4277 - val_accuracy: 0.4906
Epoch 8/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.4054 - accuracy: 0.5038 - val_loss: 1.3843 - val_accuracy: 0.5130
Epoch 9/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3816 - accuracy: 0.5106 - val_loss: 1.3691 - val_accuracy: 0.5108
Epoch 10/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3547 - accuracy: 0.5206 - val_loss: 1.3552 - val_accuracy: 0.5226
Epoch 11/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.3244 - accuracy: 0.5371 - val_loss: 1.3678 - val_accuracy: 0.5142
Epoch 12/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3078 - accuracy: 0.5393 - val_loss: 1.3844 - val_accuracy: 0.5080
Epoch 13/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2889 - accuracy: 0.5431 - val_loss: 1.3566 - val_accuracy: 0.5164
Epoch 14/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2607 - accuracy: 0.5559 - val_loss: 1.3626 - val_accuracy: 0.5248
Epoch 15/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2580 - accuracy: 0.5587 - val_loss: 1.3616 - val_accuracy: 0.5276
Epoch 16/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2441 - accuracy: 0.5586 - val_loss: 1.3350 - val_accuracy: 0.5286
Epoch 17/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2241 - accuracy: 0.5676 - val_loss: 1.3370 - val_accuracy: 0.5408
Epoch 18/100
<<29 more lines>>
Epoch 33/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0336 - accuracy: 0.6369 - val_loss: 1.3682 - val_accuracy: 0.5450
Epoch 34/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.0228 - accuracy: 0.6388 - val_loss: 1.3348 - val_accuracy: 0.5458
Epoch 35/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0205 - accuracy: 0.6407 - val_loss: 1.3490 - val_accuracy: 0.5440
Epoch 36/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.0008 - accuracy: 0.6489 - val_loss: 1.3568 - val_accuracy: 0.5408
Epoch 37/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9785 - accuracy: 0.6543 - val_loss: 1.3628 - val_accuracy: 0.5396
Epoch 38/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9832 - accuracy: 0.6592 - val_loss: 1.3617 - val_accuracy: 0.5482
Epoch 39/100
1407/1407 [==============================] - 12s 8ms/step - loss: 0.9707 - accuracy: 0.6581 - val_loss: 1.3767 - val_accuracy: 0.5446
Epoch 40/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9590 - accuracy: 0.6651 - val_loss: 1.4200 - val_accuracy: 0.5314
Epoch 41/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9548 - accuracy: 0.6668 - val_loss: 1.3692 - val_accuracy: 0.5450
Epoch 42/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9480 - accuracy: 0.6667 - val_loss: 1.3841 - val_accuracy: 0.5310
Epoch 43/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9411 - accuracy: 0.6716 - val_loss: 1.4036 - val_accuracy: 0.5382
Epoch 44/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9383 - accuracy: 0.6708 - val_loss: 1.4114 - val_accuracy: 0.5236
Epoch 45/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9258 - accuracy: 0.6769 - val_loss: 1.4224 - val_accuracy: 0.5324
Epoch 46/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9072 - accuracy: 0.6836 - val_loss: 1.3875 - val_accuracy: 0.5442
Epoch 47/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8996 - accuracy: 0.6850 - val_loss: 1.4449 - val_accuracy: 0.5280
Epoch 48/100
1407/1407 [==============================] - 13s 9ms/step - loss: 0.9050 - accuracy: 0.6835 - val_loss: 1.4167 - val_accuracy: 0.5338
Epoch 49/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8934 - accuracy: 0.6880 - val_loss: 1.4260 - val_accuracy: 0.5294
157/157 [==============================] - 1s 2ms/step - loss: 1.3344 - accuracy: 0.5398
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 27 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 5 epochs and continued to make progress until the 16th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.
* *Does BN produce a better model?* Yes! The final model is also much better, with 54.0% accuracy instead of 47.6%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).
* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 12s instead of 8s, because of the extra computations required by the BN layers. But overall the training time (wall time) was shortened significantly!

d. *Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4633 - accuracy: 0.4792
###Markdown
We get 47.9% accuracy, which is not much better than the original model (47.6%), and not as good as the model using batch normalization (54.0%). However, convergence was almost as fast as with the BN model, plus each epoch took only 7 seconds. So it's by far the fastest model to train so far. e. *Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 9s 5ms/step - loss: 2.0583 - accuracy: 0.2742 - val_loss: 1.7429 - val_accuracy: 0.3858
Epoch 2/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.6852 - accuracy: 0.4008 - val_loss: 1.7055 - val_accuracy: 0.3792
Epoch 3/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5963 - accuracy: 0.4413 - val_loss: 1.7401 - val_accuracy: 0.4072
Epoch 4/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5231 - accuracy: 0.4634 - val_loss: 1.5728 - val_accuracy: 0.4584
Epoch 5/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.4619 - accuracy: 0.4887 - val_loss: 1.5448 - val_accuracy: 0.4702
Epoch 6/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.4074 - accuracy: 0.5061 - val_loss: 1.5678 - val_accuracy: 0.4664
Epoch 7/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.3718 - accuracy: 0.5222 - val_loss: 1.5764 - val_accuracy: 0.4824
Epoch 8/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.3220 - accuracy: 0.5387 - val_loss: 1.4805 - val_accuracy: 0.4890
Epoch 9/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2908 - accuracy: 0.5487 - val_loss: 1.5521 - val_accuracy: 0.4638
Epoch 10/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.2537 - accuracy: 0.5607 - val_loss: 1.5281 - val_accuracy: 0.4924
Epoch 11/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2215 - accuracy: 0.5782 - val_loss: 1.5147 - val_accuracy: 0.5046
Epoch 12/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.1910 - accuracy: 0.5831 - val_loss: 1.5248 - val_accuracy: 0.5002
Epoch 13/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1659 - accuracy: 0.5982 - val_loss: 1.5620 - val_accuracy: 0.5066
Epoch 14/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1282 - accuracy: 0.6120 - val_loss: 1.5440 - val_accuracy: 0.5180
Epoch 15/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1127 - accuracy: 0.6133 - val_loss: 1.5782 - val_accuracy: 0.5146
Epoch 16/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0917 - accuracy: 0.6266 - val_loss: 1.6182 - val_accuracy: 0.5182
Epoch 17/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.0620 - accuracy: 0.6331 - val_loss: 1.6285 - val_accuracy: 0.5126
Epoch 18/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0433 - accuracy: 0.6413 - val_loss: 1.6299 - val_accuracy: 0.5158
Epoch 19/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0087 - accuracy: 0.6549 - val_loss: 1.7172 - val_accuracy: 0.5062
Epoch 20/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9950 - accuracy: 0.6571 - val_loss: 1.6524 - val_accuracy: 0.5098
Epoch 21/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9848 - accuracy: 0.6652 - val_loss: 1.7686 - val_accuracy: 0.5038
Epoch 22/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9597 - accuracy: 0.6744 - val_loss: 1.6177 - val_accuracy: 0.5084
Epoch 23/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9399 - accuracy: 0.6790 - val_loss: 1.7095 - val_accuracy: 0.5082
Epoch 24/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9148 - accuracy: 0.6884 - val_loss: 1.7160 - val_accuracy: 0.5150
Epoch 25/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9023 - accuracy: 0.6949 - val_loss: 1.7017 - val_accuracy: 0.5152
Epoch 26/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8732 - accuracy: 0.7031 - val_loss: 1.7274 - val_accuracy: 0.5088
Epoch 27/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.8542 - accuracy: 0.7091 - val_loss: 1.7648 - val_accuracy: 0.5166
Epoch 28/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8499 - accuracy: 0.7118 - val_loss: 1.7973 - val_accuracy: 0.5000
157/157 [==============================] - 0s 1ms/step - loss: 1.4805 - accuracy: 0.4890
###Markdown
The model reaches 48.9% accuracy on the validation set. That's very slightly better than without dropout (47.6%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get no accuracy improvement in this case (we're still at 48.9% accuracy). So the best model we got in this exercise is the Batch Normalization model. f. *Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
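Before running the cells below (which rely on the `find_learning_rate()`, `plot_lr_vs_loss()` and `OneCycleScheduler` utilities defined earlier in this notebook), here is a minimal sketch of what a 1cycle-style callback does, just for illustration: it linearly ramps the learning rate up to a maximum during the first half of training, then ramps it back down. This is *not* the `OneCycleScheduler` class used below, only a simplified, hypothetical version (the real 1cycle schedule also cycles momentum and ends with a final annealing phase):

```python
from tensorflow import keras

class SimpleOneCycle(keras.callbacks.Callback):
    """Simplified 1cycle sketch: linear warm-up to max_rate for the first
    half of training, then linear decay back down to the starting rate."""
    def __init__(self, total_steps, max_rate, start_rate=None):
        super().__init__()
        self.total_steps = total_steps
        self.max_rate = max_rate
        self.start_rate = start_rate or max_rate / 10
        self.step = 0

    def on_batch_begin(self, batch, logs=None):
        half = max(1, self.total_steps // 2)
        if self.step < half:   # warm-up phase
            rate = self.start_rate + (self.max_rate - self.start_rate) * self.step / half
        else:                  # cool-down phase
            rate = self.max_rate - (self.max_rate - self.start_rate) * (self.step - half) / half
        keras.backend.set_value(self.model.optimizer.learning_rate, rate)
        self.step += 1
```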
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(math.ceil(len(X_train_scaled) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/15
352/352 [==============================] - 3s 6ms/step - loss: 2.2298 - accuracy: 0.2349 - val_loss: 1.7841 - val_accuracy: 0.3834
Epoch 2/15
352/352 [==============================] - 2s 6ms/step - loss: 1.7928 - accuracy: 0.3689 - val_loss: 1.6806 - val_accuracy: 0.4086
Epoch 3/15
352/352 [==============================] - 2s 6ms/step - loss: 1.6475 - accuracy: 0.4190 - val_loss: 1.6378 - val_accuracy: 0.4350
Epoch 4/15
352/352 [==============================] - 2s 6ms/step - loss: 1.5428 - accuracy: 0.4543 - val_loss: 1.6266 - val_accuracy: 0.4390
Epoch 5/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4865 - accuracy: 0.4769 - val_loss: 1.6158 - val_accuracy: 0.4384
Epoch 6/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4339 - accuracy: 0.4866 - val_loss: 1.5850 - val_accuracy: 0.4412
Epoch 7/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4042 - accuracy: 0.5056 - val_loss: 1.6146 - val_accuracy: 0.4384
Epoch 8/15
352/352 [==============================] - 2s 6ms/step - loss: 1.3437 - accuracy: 0.5229 - val_loss: 1.5299 - val_accuracy: 0.4846
Epoch 9/15
352/352 [==============================] - 2s 5ms/step - loss: 1.2721 - accuracy: 0.5459 - val_loss: 1.5145 - val_accuracy: 0.4874
Epoch 10/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1942 - accuracy: 0.5698 - val_loss: 1.4958 - val_accuracy: 0.5040
Epoch 11/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1211 - accuracy: 0.6033 - val_loss: 1.5406 - val_accuracy: 0.4984
Epoch 12/15
352/352 [==============================] - 2s 6ms/step - loss: 1.0673 - accuracy: 0.6161 - val_loss: 1.5284 - val_accuracy: 0.5144
Epoch 13/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9927 - accuracy: 0.6435 - val_loss: 1.5449 - val_accuracy: 0.5140
Epoch 14/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9205 - accuracy: 0.6703 - val_loss: 1.5652 - val_accuracy: 0.5224
Epoch 15/15
352/352 [==============================] - 2s 6ms/step - loss: 0.8936 - accuracy: 0.6801 - val_loss: 1.5912 - val_accuracy: 0.5198
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup This project requires Python 3.7 or above:
###Code
import sys
assert sys.version_info >= (3, 7)
###Output
_____no_output_____
###Markdown
And TensorFlow ≥ 2.8:
###Code
import tensorflow as tf
assert tf.__version__ >= "2.8.0"
###Output
_____no_output_____
###Markdown
As we did in previous chapters, let's define the default font sizes to make the figures prettier:
###Code
import matplotlib.pyplot as plt
plt.rc('font', size=14)
plt.rc('axes', labelsize=14, titlesize=14)
plt.rc('legend', fontsize=14)
plt.rc('xtick', labelsize=10)
plt.rc('ytick', labelsize=10)
###Output
_____no_output_____
###Markdown
And let's create the `images/deep` folder (if it doesn't already exist), and define the `save_fig()` function which is used throughout this notebook to save the figures in high-res for the book:
###Code
from pathlib import Path
IMAGES_PATH = Path() / "images" / "deep"
IMAGES_PATH.mkdir(parents=True, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = IMAGES_PATH / f"{fig_id}.{fig_extension}"
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
# extra code – this cell generates and saves Figure 11–1
import numpy as np
def sigmoid(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, sigmoid(z), "b-", linewidth=2,
label=r"$\sigma(z) = \dfrac{1}{1+e^{-z}}$")
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props,
fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props,
fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props,
fontsize=14, ha="center")
plt.grid(True)
plt.axis([-5, 5, -0.2, 1.2])
plt.xlabel("$z$")
plt.legend(loc="upper left", fontsize=16)
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
_____no_output_____
###Markdown
Xavier and He Initialization
###Code
dense = tf.keras.layers.Dense(50, activation="relu",
kernel_initializer="he_normal")
he_avg_init = tf.keras.initializers.VarianceScaling(scale=2., mode="fan_avg",
distribution="uniform")
dense = tf.keras.layers.Dense(50, activation="sigmoid",
kernel_initializer=he_avg_init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
# extra code – this cell generates and saves Figure 11–2
def leaky_relu(z, alpha):
return np.maximum(alpha * z, z)
z = np.linspace(-5, 5, 200)
plt.plot(z, leaky_relu(z, 0.1), "b-", linewidth=2, label=r"$LeakyReLU(z) = max(\alpha z, z)$")
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-1, 3.7], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.3), arrowprops=props,
fontsize=14, ha="center")
plt.xlabel("$z$")
plt.axis([-5, 5, -1, 3.7])
plt.gca().set_aspect("equal")
plt.legend()
save_fig("leaky_relu_plot")
plt.show()
leaky_relu = tf.keras.layers.LeakyReLU(alpha=0.2) # defaults to alpha=0.3
dense = tf.keras.layers.Dense(50, activation=leaky_relu,
kernel_initializer="he_normal")
model = tf.keras.models.Sequential([
# [...] # more layers
tf.keras.layers.Dense(50, kernel_initializer="he_normal"), # no activation
tf.keras.layers.LeakyReLU(alpha=0.2), # activation as a separate layer
# [...] # more layers
])
###Output
2021-12-16 11:22:41.636848: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
###Markdown
ELU Implementing ELU in TensorFlow is trivial: just specify the activation function when building each layer, and use He initialization:
###Code
dense = tf.keras.layers.Dense(50, activation="elu",
kernel_initializer="he_normal")
###Output
_____no_output_____
###Markdown
SELU By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too, and other constraints are respected, as explained in the book). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
# extra code – this cell generates and saves Figure 11–3
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1 / np.sqrt(2)) * np.exp(1 / 2) - 1)
scale_0_1 = (
(1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e))
* np.sqrt(2 * np.pi)
* (
2 * erfc(np.sqrt(2)) * np.e ** 2
+ np.pi * erfc(1 / np.sqrt(2)) ** 2 * np.e
- 2 * (2 + np.pi) * erfc(1 / np.sqrt(2)) * np.sqrt(np.e)
+ np.pi
+ 2
) ** (-1 / 2)
)
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
z = np.linspace(-5, 5, 200)
plt.plot(z, elu(z), "b-", linewidth=2, label=r"ELU$_\alpha(z) = \alpha (e^z - 1)$ if $z < 0$, else $z$")
plt.plot(z, selu(z), "r--", linewidth=2, label=r"SELU$(z) = 1.05 \, $ELU$_{1.67}(z)$")
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k:', linewidth=2)
plt.plot([-5, 5], [-1.758, -1.758], 'k:', linewidth=2)
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.axis([-5, 5, -2.2, 3.2])
plt.xlabel("$z$")
plt.gca().set_aspect("equal")
plt.legend()
save_fig("elu_selu_plot")
plt.show()
###Output
_____no_output_____
###Markdown
Using SELU is straightforward:
###Code
dense = tf.keras.layers.Dense(50, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
**Extra material – an example of a self-regularized network using SELU** Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
tf.random.set_seed(42)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=[28, 28]))
for layer in range(100):
model.add(tf.keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(tf.keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
fashion_mnist = tf.keras.datasets.fashion_mnist.load_data()
(X_train_full, y_train_full), (X_test, y_test) = fashion_mnist
X_train, y_train = X_train_full[:-5000], y_train_full[:-5000]
X_valid, y_valid = X_train_full[-5000:], y_train_full[-5000:]
X_train, X_valid, X_test = X_train / 255, X_valid / 255, X_test / 255
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
"Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
2021-12-16 11:22:44.499697: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
###Markdown
The network managed to learn, despite how deep it is. Now look at what happens if we try to use the ReLU activation function instead:
###Code
tf.random.set_seed(42)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=[28, 28]))
for layer in range(100):
model.add(tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"))
model.add(tf.keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 12s 6ms/step - loss: 1.6932 - accuracy: 0.3071 - val_loss: 1.2058 - val_accuracy: 0.5106
Epoch 2/5
1719/1719 [==============================] - 11s 6ms/step - loss: 1.1132 - accuracy: 0.5297 - val_loss: 0.9682 - val_accuracy: 0.5718
Epoch 3/5
1719/1719 [==============================] - 10s 6ms/step - loss: 0.9480 - accuracy: 0.6117 - val_loss: 1.0552 - val_accuracy: 0.5102
Epoch 4/5
1719/1719 [==============================] - 10s 6ms/step - loss: 0.9763 - accuracy: 0.6003 - val_loss: 0.7764 - val_accuracy: 0.7070
Epoch 5/5
1719/1719 [==============================] - 11s 6ms/step - loss: 0.7892 - accuracy: 0.6875 - val_loss: 0.7485 - val_accuracy: 0.7054
###Markdown
Not great at all: we suffered from the vanishing/exploding gradients problem. GELU, Swish and Mish
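As a quick illustration (assuming a recent enough TensorFlow: `"gelu"` and `"swish"` have been built-in Keras activation names since roughly TF 2.4, while Mish lives in TensorFlow Addons rather than core Keras at this point), these activations can be used just like the others:

```python
import tensorflow as tf

dense_gelu = tf.keras.layers.Dense(50, activation="gelu",
                                   kernel_initializer="he_normal")
dense_swish = tf.keras.layers.Dense(50, activation="swish",
                                    kernel_initializer="he_normal")

# Mish is not in core Keras here; if tensorflow-addons is installed:
# import tensorflow_addons as tfa
# dense_mish = tf.keras.layers.Dense(50, activation=tfa.activations.mish)
```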
###Code
# extra code – this cell generates and saves Figure 11–4
def swish(z, beta=1):
return z * sigmoid(beta * z)
def approx_gelu(z):
return swish(z, beta=1.702)
def softplus(z):
return np.log(1 + np.exp(z))
def mish(z):
return z * np.tanh(softplus(z))
z = np.linspace(-4, 2, 200)
beta = 0.6
plt.plot(z, approx_gelu(z), "b-", linewidth=2,
label=r"GELU$(z) = z\,\Phi(z)$")
plt.plot(z, swish(z), "r--", linewidth=2,
label=r"Swish$(z) = z\,\sigma(z)$")
plt.plot(z, swish(z, beta), "r:", linewidth=2,
label=fr"Swish$_{{\beta={beta}}}(z)=z\,\sigma({beta}\,z)$")
plt.plot(z, mish(z), "g:", linewidth=3,
label=fr"Mish$(z) = z\,\tanh($softplus$(z))$")
plt.plot([-4, 2], [0, 0], 'k-')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.axis([-4, 2, -1, 2])
plt.gca().set_aspect("equal")
plt.xlabel("$z$")
plt.legend(loc="upper left")
save_fig("gelu_swish_mish_plot")
plt.show()
###Output
_____no_output_____
###Markdown
Batch Normalization
###Code
# extra code - clear the name counters and set the random seed
tf.keras.backend.clear_session()
tf.random.set_seed(42)
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=[28, 28]),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(300, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(10, activation="softmax")
])
model.summary()
[(var.name, var.trainable) for var in model.layers[1].variables]
# extra code – just show that the model works! 😊
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics="accuracy")
model.fit(X_train, y_train, epochs=2, validation_data=(X_valid, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5559 - accuracy: 0.8094 - val_loss: 0.4016 - val_accuracy: 0.8558
Epoch 2/2
1719/1719 [==============================] - 3s 1ms/step - loss: 0.4083 - accuracy: 0.8561 - val_loss: 0.3676 - val_accuracy: 0.8650
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need bias terms, since the `BatchNormalization` layer has its own offset parameters: keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
# extra code - clear the name counters and set the random seed
tf.keras.backend.clear_session()
tf.random.set_seed(42)
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=[28, 28]),
tf.keras.layers.Dense(300, kernel_initializer="he_normal", use_bias=False),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Activation("relu"),
tf.keras.layers.Dense(100, kernel_initializer="he_normal", use_bias=False),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Activation("relu"),
tf.keras.layers.Dense(10, activation="softmax")
])
# extra code – just show that the model works! 😊
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics="accuracy")
model.fit(X_train, y_train, epochs=2, validation_data=(X_valid, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 3s 1ms/step - loss: 0.6063 - accuracy: 0.7993 - val_loss: 0.4296 - val_accuracy: 0.8418
Epoch 2/2
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4275 - accuracy: 0.8500 - val_loss: 0.3752 - val_accuracy: 0.8646
###Markdown
Gradient Clipping All `tf.keras.optimizers` accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = tf.keras.optimizers.SGD(clipvalue=1.0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer)
optimizer = tf.keras.optimizers.SGD(clipnorm=1.0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer)
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:
* `X_train_A`: all images of all items except for T-shirts/tops and pullovers (classes 0 and 2).
* `X_train_B`: a much smaller training set of just the first 200 images of T-shirts/tops and pullovers.

The validation set and the test set are also split this way, but without restricting the number of images. We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (trousers, dresses, coats, sandals, shirts, sneakers, bags, and ankle boots) are somewhat similar to classes in set B (T-shirts/tops and pullovers). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in chapter 14).
###Code
# extra code – split Fashion MNIST into tasks A and B, then train and save
# model A to "my_model_A".
pos_class_id = class_names.index("Pullover")
neg_class_id = class_names.index("T-shirt/top")
def split_dataset(X, y):
y_for_B = (y == pos_class_id) | (y == neg_class_id)
y_A = y[~y_for_B]
y_B = (y[y_for_B] == pos_class_id).astype(np.float32)
old_class_ids = list(set(range(10)) - set([neg_class_id, pos_class_id]))
for old_class_id, new_class_id in zip(old_class_ids, range(8)):
y_A[y_A == old_class_id] = new_class_id # reorder class ids for A
return ((X[~y_for_B], y_A), (X[y_for_B], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
tf.random.set_seed(42)
model_A = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=[28, 28]),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.Dense(8, activation="softmax")
])
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A")
# extra code – train and evaluate model B, without reusing model A
tf.random.set_seed(42)
model_B = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=[28, 28]),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.Dense(1, activation="sigmoid")
])
model_B.compile(loss="binary_crossentropy",
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.evaluate(X_test_B, y_test_B)
###Output
Epoch 1/20
7/7 [==============================] - 0s 20ms/step - loss: 0.7167 - accuracy: 0.5450 - val_loss: 0.7052 - val_accuracy: 0.5272
Epoch 2/20
7/7 [==============================] - 0s 7ms/step - loss: 0.6805 - accuracy: 0.5800 - val_loss: 0.6758 - val_accuracy: 0.6004
Epoch 3/20
7/7 [==============================] - 0s 7ms/step - loss: 0.6532 - accuracy: 0.6650 - val_loss: 0.6530 - val_accuracy: 0.6746
Epoch 4/20
7/7 [==============================] - 0s 6ms/step - loss: 0.6289 - accuracy: 0.7150 - val_loss: 0.6317 - val_accuracy: 0.7517
Epoch 5/20
7/7 [==============================] - 0s 7ms/step - loss: 0.6079 - accuracy: 0.7800 - val_loss: 0.6105 - val_accuracy: 0.8091
Epoch 6/20
7/7 [==============================] - 0s 7ms/step - loss: 0.5866 - accuracy: 0.8400 - val_loss: 0.5913 - val_accuracy: 0.8447
Epoch 7/20
7/7 [==============================] - 0s 6ms/step - loss: 0.5670 - accuracy: 0.8850 - val_loss: 0.5728 - val_accuracy: 0.8833
Epoch 8/20
7/7 [==============================] - 0s 7ms/step - loss: 0.5499 - accuracy: 0.8900 - val_loss: 0.5571 - val_accuracy: 0.8971
Epoch 9/20
7/7 [==============================] - 0s 7ms/step - loss: 0.5331 - accuracy: 0.9150 - val_loss: 0.5427 - val_accuracy: 0.9050
Epoch 10/20
7/7 [==============================] - 0s 7ms/step - loss: 0.5180 - accuracy: 0.9250 - val_loss: 0.5290 - val_accuracy: 0.9080
Epoch 11/20
7/7 [==============================] - 0s 6ms/step - loss: 0.5038 - accuracy: 0.9350 - val_loss: 0.5160 - val_accuracy: 0.9189
Epoch 12/20
7/7 [==============================] - 0s 6ms/step - loss: 0.4903 - accuracy: 0.9350 - val_loss: 0.5032 - val_accuracy: 0.9228
Epoch 13/20
7/7 [==============================] - 0s 7ms/step - loss: 0.4770 - accuracy: 0.9400 - val_loss: 0.4925 - val_accuracy: 0.9228
Epoch 14/20
7/7 [==============================] - 0s 6ms/step - loss: 0.4656 - accuracy: 0.9450 - val_loss: 0.4817 - val_accuracy: 0.9258
Epoch 15/20
7/7 [==============================] - 0s 6ms/step - loss: 0.4546 - accuracy: 0.9550 - val_loss: 0.4708 - val_accuracy: 0.9298
Epoch 16/20
7/7 [==============================] - 0s 6ms/step - loss: 0.4435 - accuracy: 0.9550 - val_loss: 0.4608 - val_accuracy: 0.9318
Epoch 17/20
7/7 [==============================] - 0s 6ms/step - loss: 0.4330 - accuracy: 0.9600 - val_loss: 0.4510 - val_accuracy: 0.9337
Epoch 18/20
7/7 [==============================] - 0s 6ms/step - loss: 0.4226 - accuracy: 0.9600 - val_loss: 0.4406 - val_accuracy: 0.9367
Epoch 19/20
7/7 [==============================] - 0s 6ms/step - loss: 0.4119 - accuracy: 0.9600 - val_loss: 0.4311 - val_accuracy: 0.9377
Epoch 20/20
7/7 [==============================] - 0s 7ms/step - loss: 0.4025 - accuracy: 0.9600 - val_loss: 0.4225 - val_accuracy: 0.9367
63/63 [==============================] - 0s 728us/step - loss: 0.4317 - accuracy: 0.9185
###Markdown
Model B reaches 91.85% accuracy on the test set. Now let's try reusing the pretrained model A.
###Code
model_A = tf.keras.models.load_model("my_model_A")
model_B_on_A = tf.keras.Sequential(model_A.layers[:-1])
model_B_on_A.add(tf.keras.layers.Dense(1, activation="sigmoid"))
###Output
_____no_output_____
###Markdown
Note that `model_B_on_A` and `model_A` actually share layers now, so when we train one, it will update both models. If we want to avoid that, we need to build `model_B_on_A` on top of a *clone* of `model_A`:
###Code
tf.random.set_seed(42) # extra code – ensure reproducibility
model_A_clone = tf.keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
# extra code – creating model_B_on_A just like in the previous cell
model_B_on_A = tf.keras.Sequential(model_A_clone.layers[:-1])
model_B_on_A.add(tf.keras.layers.Dense(1, activation="sigmoid"))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
optimizer = tf.keras.optimizers.SGD(learning_rate=0.001)
model_B_on_A.compile(loss="binary_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
optimizer = tf.keras.optimizers.SGD(learning_rate=0.001)
model_B_on_A.compile(loss="binary_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 0s 23ms/step - loss: 1.7893 - accuracy: 0.5550 - val_loss: 1.3324 - val_accuracy: 0.5084
Epoch 2/4
7/7 [==============================] - 0s 7ms/step - loss: 1.1235 - accuracy: 0.5350 - val_loss: 0.9199 - val_accuracy: 0.4807
Epoch 3/4
7/7 [==============================] - 0s 7ms/step - loss: 0.8836 - accuracy: 0.5000 - val_loss: 0.8266 - val_accuracy: 0.4837
Epoch 4/4
7/7 [==============================] - 0s 7ms/step - loss: 0.8202 - accuracy: 0.5250 - val_loss: 0.7795 - val_accuracy: 0.4985
Epoch 1/16
7/7 [==============================] - 0s 21ms/step - loss: 0.7348 - accuracy: 0.6050 - val_loss: 0.6372 - val_accuracy: 0.6914
Epoch 2/16
7/7 [==============================] - 0s 7ms/step - loss: 0.6055 - accuracy: 0.7600 - val_loss: 0.5283 - val_accuracy: 0.8229
Epoch 3/16
7/7 [==============================] - 0s 7ms/step - loss: 0.4992 - accuracy: 0.8400 - val_loss: 0.4742 - val_accuracy: 0.8180
Epoch 4/16
7/7 [==============================] - 0s 6ms/step - loss: 0.4297 - accuracy: 0.8700 - val_loss: 0.4212 - val_accuracy: 0.8773
Epoch 5/16
7/7 [==============================] - 0s 7ms/step - loss: 0.3825 - accuracy: 0.9050 - val_loss: 0.3797 - val_accuracy: 0.9031
Epoch 6/16
7/7 [==============================] - 0s 6ms/step - loss: 0.3438 - accuracy: 0.9250 - val_loss: 0.3534 - val_accuracy: 0.9149
Epoch 7/16
7/7 [==============================] - 0s 7ms/step - loss: 0.3148 - accuracy: 0.9500 - val_loss: 0.3384 - val_accuracy: 0.9001
Epoch 8/16
7/7 [==============================] - 0s 7ms/step - loss: 0.3012 - accuracy: 0.9450 - val_loss: 0.3179 - val_accuracy: 0.9209
Epoch 9/16
7/7 [==============================] - 0s 6ms/step - loss: 0.2767 - accuracy: 0.9650 - val_loss: 0.3043 - val_accuracy: 0.9298
Epoch 10/16
7/7 [==============================] - 0s 6ms/step - loss: 0.2623 - accuracy: 0.9550 - val_loss: 0.2929 - val_accuracy: 0.9308
Epoch 11/16
7/7 [==============================] - 0s 6ms/step - loss: 0.2512 - accuracy: 0.9600 - val_loss: 0.2830 - val_accuracy: 0.9327
Epoch 12/16
7/7 [==============================] - 0s 6ms/step - loss: 0.2397 - accuracy: 0.9600 - val_loss: 0.2744 - val_accuracy: 0.9318
Epoch 13/16
7/7 [==============================] - 0s 6ms/step - loss: 0.2295 - accuracy: 0.9600 - val_loss: 0.2675 - val_accuracy: 0.9327
Epoch 14/16
7/7 [==============================] - 0s 6ms/step - loss: 0.2225 - accuracy: 0.9600 - val_loss: 0.2598 - val_accuracy: 0.9347
Epoch 15/16
7/7 [==============================] - 0s 6ms/step - loss: 0.2147 - accuracy: 0.9600 - val_loss: 0.2542 - val_accuracy: 0.9357
Epoch 16/16
7/7 [==============================] - 0s 7ms/step - loss: 0.2077 - accuracy: 0.9600 - val_loss: 0.2492 - val_accuracy: 0.9377
###Markdown
So, what's the final verdict?
###Code
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 667us/step - loss: 0.2546 - accuracy: 0.9385
###Markdown
Great! We got a bit of transfer: the model's accuracy went up 2 percentage points, from 91.85% to 93.85%. This means the error rate dropped by almost 25%:
###Code
1 - (100 - 93.85) / (100 - 91.85)
###Output
_____no_output_____
###Markdown
Faster Optimizers
###Code
# extra code – a little function to test an optimizer on Fashion MNIST
def build_model(seed=42):
tf.random.set_seed(seed)
return tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=[28, 28]),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.Dense(10, activation="softmax")
])
def build_and_train_model(optimizer):
model = build_model()
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
return model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
optimizer = tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.9)
history_sgd = build_and_train_model(optimizer) # extra code
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.6877 - accuracy: 0.7677 - val_loss: 0.4960 - val_accuracy: 0.8172
Epoch 2/10
1719/1719 [==============================] - 2s 948us/step - loss: 0.4619 - accuracy: 0.8378 - val_loss: 0.4421 - val_accuracy: 0.8404
Epoch 3/10
1719/1719 [==============================] - 1s 868us/step - loss: 0.4179 - accuracy: 0.8525 - val_loss: 0.4188 - val_accuracy: 0.8538
Epoch 4/10
1719/1719 [==============================] - 1s 866us/step - loss: 0.3902 - accuracy: 0.8621 - val_loss: 0.3814 - val_accuracy: 0.8604
Epoch 5/10
1719/1719 [==============================] - 1s 869us/step - loss: 0.3686 - accuracy: 0.8691 - val_loss: 0.3665 - val_accuracy: 0.8656
Epoch 6/10
1719/1719 [==============================] - 2s 925us/step - loss: 0.3553 - accuracy: 0.8732 - val_loss: 0.3643 - val_accuracy: 0.8720
Epoch 7/10
1719/1719 [==============================] - 2s 908us/step - loss: 0.3385 - accuracy: 0.8778 - val_loss: 0.3611 - val_accuracy: 0.8684
Epoch 8/10
1719/1719 [==============================] - 2s 926us/step - loss: 0.3297 - accuracy: 0.8796 - val_loss: 0.3490 - val_accuracy: 0.8726
Epoch 9/10
1719/1719 [==============================] - 2s 893us/step - loss: 0.3200 - accuracy: 0.8850 - val_loss: 0.3625 - val_accuracy: 0.8666
Epoch 10/10
1719/1719 [==============================] - 2s 886us/step - loss: 0.3097 - accuracy: 0.8881 - val_loss: 0.3656 - val_accuracy: 0.8672
###Markdown
Momentum optimization
###Code
optimizer = tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.9)
history_momentum = build_and_train_model(optimizer) # extra code
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 941us/step - loss: 0.6877 - accuracy: 0.7677 - val_loss: 0.4960 - val_accuracy: 0.8172
Epoch 2/10
1719/1719 [==============================] - 2s 878us/step - loss: 0.4619 - accuracy: 0.8378 - val_loss: 0.4421 - val_accuracy: 0.8404
Epoch 3/10
1719/1719 [==============================] - 2s 898us/step - loss: 0.4179 - accuracy: 0.8525 - val_loss: 0.4188 - val_accuracy: 0.8538
Epoch 4/10
1719/1719 [==============================] - 2s 934us/step - loss: 0.3902 - accuracy: 0.8621 - val_loss: 0.3814 - val_accuracy: 0.8604
Epoch 5/10
1719/1719 [==============================] - 2s 910us/step - loss: 0.3686 - accuracy: 0.8691 - val_loss: 0.3665 - val_accuracy: 0.8656
Epoch 6/10
1719/1719 [==============================] - 2s 913us/step - loss: 0.3553 - accuracy: 0.8732 - val_loss: 0.3643 - val_accuracy: 0.8720
Epoch 7/10
1719/1719 [==============================] - 2s 893us/step - loss: 0.3385 - accuracy: 0.8778 - val_loss: 0.3611 - val_accuracy: 0.8684
Epoch 8/10
1719/1719 [==============================] - 2s 968us/step - loss: 0.3297 - accuracy: 0.8796 - val_loss: 0.3490 - val_accuracy: 0.8726
Epoch 9/10
1719/1719 [==============================] - 2s 913us/step - loss: 0.3200 - accuracy: 0.8850 - val_loss: 0.3625 - val_accuracy: 0.8666
Epoch 10/10
1719/1719 [==============================] - 1s 858us/step - loss: 0.3097 - accuracy: 0.8881 - val_loss: 0.3656 - val_accuracy: 0.8672
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.9,
nesterov=True)
history_nesterov = build_and_train_model(optimizer) # extra code
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 907us/step - loss: 0.6777 - accuracy: 0.7711 - val_loss: 0.4796 - val_accuracy: 0.8260
Epoch 2/10
1719/1719 [==============================] - 2s 898us/step - loss: 0.4570 - accuracy: 0.8398 - val_loss: 0.4358 - val_accuracy: 0.8396
Epoch 3/10
1719/1719 [==============================] - 1s 872us/step - loss: 0.4140 - accuracy: 0.8537 - val_loss: 0.4013 - val_accuracy: 0.8566
Epoch 4/10
1719/1719 [==============================] - 2s 902us/step - loss: 0.3882 - accuracy: 0.8629 - val_loss: 0.3802 - val_accuracy: 0.8616
Epoch 5/10
1719/1719 [==============================] - 2s 913us/step - loss: 0.3666 - accuracy: 0.8703 - val_loss: 0.3689 - val_accuracy: 0.8638
Epoch 6/10
1719/1719 [==============================] - 2s 882us/step - loss: 0.3531 - accuracy: 0.8732 - val_loss: 0.3681 - val_accuracy: 0.8688
Epoch 7/10
1719/1719 [==============================] - 2s 958us/step - loss: 0.3375 - accuracy: 0.8784 - val_loss: 0.3658 - val_accuracy: 0.8670
Epoch 8/10
1719/1719 [==============================] - 2s 895us/step - loss: 0.3278 - accuracy: 0.8815 - val_loss: 0.3598 - val_accuracy: 0.8682
Epoch 9/10
1719/1719 [==============================] - 2s 878us/step - loss: 0.3183 - accuracy: 0.8855 - val_loss: 0.3472 - val_accuracy: 0.8720
Epoch 10/10
1719/1719 [==============================] - 2s 921us/step - loss: 0.3081 - accuracy: 0.8891 - val_loss: 0.3624 - val_accuracy: 0.8708
###Markdown
AdaGrad
###Code
optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.001)
history_adagrad = build_and_train_model(optimizer) # extra code
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.0003 - accuracy: 0.6822 - val_loss: 0.6876 - val_accuracy: 0.7744
Epoch 2/10
1719/1719 [==============================] - 2s 912us/step - loss: 0.6389 - accuracy: 0.7904 - val_loss: 0.5837 - val_accuracy: 0.8048
Epoch 3/10
1719/1719 [==============================] - 2s 930us/step - loss: 0.5682 - accuracy: 0.8105 - val_loss: 0.5379 - val_accuracy: 0.8154
Epoch 4/10
1719/1719 [==============================] - 2s 878us/step - loss: 0.5316 - accuracy: 0.8215 - val_loss: 0.5135 - val_accuracy: 0.8244
Epoch 5/10
1719/1719 [==============================] - 1s 855us/step - loss: 0.5076 - accuracy: 0.8295 - val_loss: 0.4937 - val_accuracy: 0.8288
Epoch 6/10
1719/1719 [==============================] - 1s 868us/step - loss: 0.4905 - accuracy: 0.8338 - val_loss: 0.4821 - val_accuracy: 0.8312
Epoch 7/10
1719/1719 [==============================] - 2s 940us/step - loss: 0.4776 - accuracy: 0.8371 - val_loss: 0.4705 - val_accuracy: 0.8348
Epoch 8/10
1719/1719 [==============================] - 2s 966us/step - loss: 0.4674 - accuracy: 0.8409 - val_loss: 0.4611 - val_accuracy: 0.8362
Epoch 9/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.4587 - accuracy: 0.8435 - val_loss: 0.4548 - val_accuracy: 0.8406
Epoch 10/10
1719/1719 [==============================] - 2s 873us/step - loss: 0.4511 - accuracy: 0.8458 - val_loss: 0.4469 - val_accuracy: 0.8424
###Markdown
RMSProp
###Code
optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9)
history_rmsprop = build_and_train_model(optimizer) # extra code
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5138 - accuracy: 0.8135 - val_loss: 0.4413 - val_accuracy: 0.8338
Epoch 2/10
1719/1719 [==============================] - 2s 942us/step - loss: 0.3932 - accuracy: 0.8590 - val_loss: 0.4518 - val_accuracy: 0.8370
Epoch 3/10
1719/1719 [==============================] - 2s 948us/step - loss: 0.3711 - accuracy: 0.8692 - val_loss: 0.3914 - val_accuracy: 0.8686
Epoch 4/10
1719/1719 [==============================] - 2s 949us/step - loss: 0.3643 - accuracy: 0.8735 - val_loss: 0.4176 - val_accuracy: 0.8644
Epoch 5/10
1719/1719 [==============================] - 2s 970us/step - loss: 0.3578 - accuracy: 0.8769 - val_loss: 0.3874 - val_accuracy: 0.8696
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3561 - accuracy: 0.8775 - val_loss: 0.4650 - val_accuracy: 0.8590
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3528 - accuracy: 0.8783 - val_loss: 0.4122 - val_accuracy: 0.8774
Epoch 8/10
1719/1719 [==============================] - 2s 989us/step - loss: 0.3491 - accuracy: 0.8811 - val_loss: 0.5151 - val_accuracy: 0.8586
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3479 - accuracy: 0.8829 - val_loss: 0.4457 - val_accuracy: 0.8856
Epoch 10/10
1719/1719 [==============================] - 2s 1000us/step - loss: 0.3437 - accuracy: 0.8830 - val_loss: 0.4781 - val_accuracy: 0.8636
###Markdown
Adam Optimization
###Code
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9,
beta_2=0.999)
history_adam = build_and_train_model(optimizer) # extra code
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4949 - accuracy: 0.8220 - val_loss: 0.4110 - val_accuracy: 0.8428
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3727 - accuracy: 0.8637 - val_loss: 0.4153 - val_accuracy: 0.8370
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3372 - accuracy: 0.8756 - val_loss: 0.3600 - val_accuracy: 0.8708
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3126 - accuracy: 0.8833 - val_loss: 0.3498 - val_accuracy: 0.8760
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2965 - accuracy: 0.8901 - val_loss: 0.3264 - val_accuracy: 0.8794
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2821 - accuracy: 0.8947 - val_loss: 0.3295 - val_accuracy: 0.8782
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2672 - accuracy: 0.8993 - val_loss: 0.3473 - val_accuracy: 0.8790
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2587 - accuracy: 0.9020 - val_loss: 0.3230 - val_accuracy: 0.8818
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2500 - accuracy: 0.9057 - val_loss: 0.3676 - val_accuracy: 0.8744
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2428 - accuracy: 0.9073 - val_loss: 0.3879 - val_accuracy: 0.8696
###Markdown
**Adamax Optimization**
###Code
optimizer = tf.keras.optimizers.Adamax(learning_rate=0.001, beta_1=0.9,
beta_2=0.999)
history_adamax = build_and_train_model(optimizer) # extra code
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5327 - accuracy: 0.8151 - val_loss: 0.4402 - val_accuracy: 0.8340
Epoch 2/10
1719/1719 [==============================] - 2s 935us/step - loss: 0.3950 - accuracy: 0.8591 - val_loss: 0.3907 - val_accuracy: 0.8512
Epoch 3/10
1719/1719 [==============================] - 2s 933us/step - loss: 0.3563 - accuracy: 0.8715 - val_loss: 0.3730 - val_accuracy: 0.8676
Epoch 4/10
1719/1719 [==============================] - 2s 942us/step - loss: 0.3335 - accuracy: 0.8797 - val_loss: 0.3453 - val_accuracy: 0.8738
Epoch 5/10
1719/1719 [==============================] - 2s 993us/step - loss: 0.3129 - accuracy: 0.8853 - val_loss: 0.3270 - val_accuracy: 0.8792
Epoch 6/10
1719/1719 [==============================] - 2s 926us/step - loss: 0.2986 - accuracy: 0.8913 - val_loss: 0.3396 - val_accuracy: 0.8772
Epoch 7/10
1719/1719 [==============================] - 2s 939us/step - loss: 0.2854 - accuracy: 0.8949 - val_loss: 0.3390 - val_accuracy: 0.8770
Epoch 8/10
1719/1719 [==============================] - 2s 949us/step - loss: 0.2757 - accuracy: 0.8984 - val_loss: 0.3147 - val_accuracy: 0.8854
Epoch 9/10
1719/1719 [==============================] - 2s 952us/step - loss: 0.2662 - accuracy: 0.9020 - val_loss: 0.3341 - val_accuracy: 0.8760
Epoch 10/10
1719/1719 [==============================] - 2s 957us/step - loss: 0.2542 - accuracy: 0.9063 - val_loss: 0.3282 - val_accuracy: 0.8780
###Markdown
**Nadam Optimization**
###Code
optimizer = tf.keras.optimizers.Nadam(learning_rate=0.001, beta_1=0.9,
beta_2=0.999)
history_nadam = build_and_train_model(optimizer) # extra code
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 0.4826 - accuracy: 0.8284 - val_loss: 0.4092 - val_accuracy: 0.8456
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3610 - accuracy: 0.8667 - val_loss: 0.3893 - val_accuracy: 0.8592
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3270 - accuracy: 0.8784 - val_loss: 0.3653 - val_accuracy: 0.8712
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3049 - accuracy: 0.8874 - val_loss: 0.3444 - val_accuracy: 0.8726
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2897 - accuracy: 0.8905 - val_loss: 0.3174 - val_accuracy: 0.8810
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2753 - accuracy: 0.8981 - val_loss: 0.3389 - val_accuracy: 0.8830
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2652 - accuracy: 0.9000 - val_loss: 0.3725 - val_accuracy: 0.8734
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2563 - accuracy: 0.9034 - val_loss: 0.3229 - val_accuracy: 0.8828
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2463 - accuracy: 0.9079 - val_loss: 0.3353 - val_accuracy: 0.8818
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2402 - accuracy: 0.9091 - val_loss: 0.3813 - val_accuracy: 0.8740
###Markdown
**AdamW Optimization** On Colab or Kaggle, we need to install the TensorFlow-Addons library:
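Alternatively (this is an assumption about your environment, not what the cell below uses): in more recent TensorFlow releases, roughly 2.11 and later, AdamW ships directly with Keras, so TensorFlow Addons is no longer needed. A sketch under that assumption:

```python
import tensorflow as tf

# Assumes TF >= 2.11, where AdamW is part of tf.keras.optimizers
optimizer = tf.keras.optimizers.AdamW(weight_decay=1e-5, learning_rate=0.001,
                                      beta_1=0.9, beta_2=0.999)
```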
###Code
if "google.colab" in sys.modules:
%pip install -q -U tensorflow-addons
import tensorflow_addons as tfa
optimizer = tfa.optimizers.AdamW(weight_decay=1e-5, learning_rate=0.001,
beta_1=0.9, beta_2=0.999)
history_adamw = build_and_train_model(optimizer) # extra code
# extra code – visualize the learning curves of all the optimizers
for loss in ("loss", "val_loss"):
plt.figure(figsize=(12, 8))
opt_names = "SGD Momentum Nesterov AdaGrad RMSProp Adam Adamax Nadam AdamW"
for history, opt_name in zip((history_sgd, history_momentum, history_nesterov,
history_adagrad, history_rmsprop, history_adam,
history_adamax, history_nadam, history_adamw),
opt_names.split()):
plt.plot(history.history[loss], label=f"{opt_name}", linewidth=3)
plt.grid()
plt.xlabel("Epochs")
plt.ylabel({"loss": "Training loss", "val_loss": "Validation loss"}[loss])
plt.legend(loc="upper left")
plt.axis([0, 9, 0.1, 0.7])
plt.show()
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```
* Keras uses `c=1` and `s = 1 / decay`
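If you prefer not to rely on the legacy `decay` argument used in the next cell, here is an equivalent sketch using the `tf.keras.optimizers.schedules` API: with `decay_steps=1` and `decay_rate=1e-4`, `InverseTimeDecay` yields `lr0 / (1 + 1e-4 * step)`, i.e. power scheduling with `c=1`:

```python
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
    initial_learning_rate=0.01, decay_steps=1, decay_rate=1e-4)
optimizer = tf.keras.optimizers.SGD(learning_rate=lr_schedule)
```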
###Code
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, decay=1e-4)
history_power_scheduling = build_and_train_model(optimizer) # extra code
# extra code – this cell plots power scheduling
import math
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = math.ceil(len(X_train) / batch_size)
n_epochs = 25
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1 ** (epoch / s)```
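The same schedule can also be expressed with the `tf.keras.optimizers.schedules` API; a sketch, assuming `s = 20` epochs and the batch size of 32 used in this notebook (the schedule decays per training step, hence the conversion from epochs to steps):

```python
import math
import tensorflow as tf

n_train = 55_000                           # size of the Fashion MNIST training set above
steps_per_epoch = math.ceil(n_train / 32)  # batch size 32
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.01, decay_steps=20 * steps_per_epoch,
    decay_rate=0.1)                        # lr = 0.01 * 0.1 ** (step / (20 * steps_per_epoch))
optimizer = tf.keras.optimizers.SGD(learning_rate=lr_schedule)
```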
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1 ** (epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1 ** (epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
# extra code – build and compile a model for Fashion MNIST
tf.random.set_seed(42)
model = build_model()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.001)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
lr_scheduler = tf.keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train, y_train, epochs=n_epochs,
validation_data=(X_valid, y_valid),
callbacks=[lr_scheduler])
# extra code – this cell plots exponential scheduling
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1 ** (1 / 20)
###Output
_____no_output_____
###Markdown
**Extra material**: if you want to update the learning rate at each iteration rather than at each epoch, you can write your own callback class:
###Code
K = tf.keras.backend
class ExponentialDecay(tf.keras.callbacks.Callback):
def __init__(self, n_steps=40_000):
super().__init__()
self.n_steps = n_steps
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.learning_rate)
new_learning_rate = lr * 0.1 ** (1 / self.n_steps)
K.set_value(self.model.optimizer.learning_rate, new_learning_rate)
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.learning_rate)
lr0 = 0.01
model = build_model()
optimizer = tf.keras.optimizers.SGD(learning_rate=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 25
batch_size = 32
n_steps = n_epochs * math.ceil(len(X_train) / batch_size)
exp_decay = ExponentialDecay(n_steps)
history = model.fit(X_train, y_train, epochs=n_epochs,
validation_data=(X_valid, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * math.ceil(len(X_train) / batch_size)
steps = np.arange(n_steps)
decay_rate = 0.1
lrs = lr0 * decay_rate ** (steps / n_steps)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
# extra code – this cell demonstrates a more general way to define
# piecewise constant scheduling.
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[(boundaries > epoch).argmax() - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
# extra code – use a tf.keras.callbacks.LearningRateScheduler like earlier
n_epochs = 25
lr_scheduler = tf.keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = build_model()
optimizer = tf.keras.optimizers.Nadam(learning_rate=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=n_epochs,
validation_data=(X_valid, y_valid),
callbacks=[lr_scheduler])
# extra code – this cell plots piecewise constant scheduling
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
# extra code – build and compile the model
model = build_model()
optimizer = tf.keras.optimizers.SGD(learning_rate=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
lr_scheduler = tf.keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
history = model.fit(X_train, y_train, epochs=n_epochs,
validation_data=(X_valid, y_valid),
callbacks=[lr_scheduler])
# extra code – this cell plots performance scheduling
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
import math
batch_size = 32
n_epochs = 25
n_steps = n_epochs * math.ceil(len(X_train) / batch_size)
scheduled_learning_rate = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate=0.01, decay_steps=n_steps, decay_rate=0.1)
optimizer = tf.keras.optimizers.SGD(learning_rate=scheduled_learning_rate)
# extra code – build and train the model
model = build_and_train_model(optimizer)
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 864us/step - loss: 0.6808 - accuracy: 0.7683 - val_loss: 0.4806 - val_accuracy: 0.8268
Epoch 2/10
1719/1719 [==============================] - 1s 812us/step - loss: 0.4686 - accuracy: 0.8359 - val_loss: 0.4420 - val_accuracy: 0.8408
Epoch 3/10
1719/1719 [==============================] - 1s 809us/step - loss: 0.4221 - accuracy: 0.8494 - val_loss: 0.4108 - val_accuracy: 0.8530
Epoch 4/10
1719/1719 [==============================] - 1s 828us/step - loss: 0.3976 - accuracy: 0.8592 - val_loss: 0.3867 - val_accuracy: 0.8582
Epoch 5/10
1719/1719 [==============================] - 1s 825us/step - loss: 0.3775 - accuracy: 0.8655 - val_loss: 0.3784 - val_accuracy: 0.8620
Epoch 6/10
1719/1719 [==============================] - 1s 817us/step - loss: 0.3633 - accuracy: 0.8705 - val_loss: 0.3796 - val_accuracy: 0.8624
Epoch 7/10
1719/1719 [==============================] - 1s 843us/step - loss: 0.3518 - accuracy: 0.8737 - val_loss: 0.3662 - val_accuracy: 0.8662
Epoch 8/10
1719/1719 [==============================] - 1s 805us/step - loss: 0.3422 - accuracy: 0.8779 - val_loss: 0.3707 - val_accuracy: 0.8628
Epoch 9/10
1719/1719 [==============================] - 1s 821us/step - loss: 0.3339 - accuracy: 0.8809 - val_loss: 0.3475 - val_accuracy: 0.8696
Epoch 10/10
1719/1719 [==============================] - 1s 829us/step - loss: 0.3266 - accuracy: 0.8826 - val_loss: 0.3473 - val_accuracy: 0.8710
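###Markdown
As a quick check (an added sketch, not in the original notebook): a `LearningRateSchedule` object can be called with a step number to see the learning rate it produces. With `decay_rate=0.1` and `decay_steps=n_steps`, it should go from 0.01 at step 0 down to 0.001 at the last step:
###Code
# extra sketch (not in the original) – inspect the schedule at a few steps
for step in (0, n_steps // 2, n_steps):
    print(f"step {step:>6}: lr = {scheduled_learning_rate(step).numpy():.6f}")
# expected roughly: 0.010000, 0.003162, 0.001000
###Output
_____no_output_____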
###Markdown
For piecewise constant scheduling, try this:
###Code
# extra code – shows how to use PiecewiseConstantDecay
scheduled_learning_rate = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
###Markdown
1Cycle scheduling The `ExponentialLearningRate` custom callback updates the learning rate during training, at the end of each batch. It multiplies it by a constant `factor`. It also saves the learning rate and loss at each batch. Since `logs["loss"]` is actually the mean loss since the start of the epoch, and we want to save the batch loss instead, we must compute the mean times the number of batches since the beginning of the epoch to get the total loss so far, then we subtract the total loss at the previous batch to get the current batch's loss.
###Code
K = tf.keras.backend
class ExponentialLearningRate(tf.keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_epoch_begin(self, epoch, logs=None):
self.sum_of_epoch_losses = 0
def on_batch_end(self, batch, logs=None):
mean_epoch_loss = logs["loss"] # the epoch's mean loss so far
new_sum_of_epoch_losses = mean_epoch_loss * (batch + 1)
batch_loss = new_sum_of_epoch_losses - self.sum_of_epoch_losses
self.sum_of_epoch_losses = new_sum_of_epoch_losses
self.rates.append(K.get_value(self.model.optimizer.learning_rate))
self.losses.append(batch_loss)
K.set_value(self.model.optimizer.learning_rate,
self.model.optimizer.learning_rate * self.factor)
###Output
_____no_output_____
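###Markdown
To make the running-mean bookkeeping concrete (a small worked example added here, not from the original notebook): if the mean loss reported after 3 batches is 0.50 and the sum of the first 2 batch losses was 1.20, the 3rd batch's loss must be 0.50 × 3 - 1.20 = 0.30. The snippet below applies the same arithmetic as `on_batch_end()` to made-up batch losses and recovers them exactly:
###Code
# extra sketch (not in the original) – recover per-batch losses from Keras' running mean
batch_losses = [0.7, 0.5, 0.3]                     # pretend these are the true batch losses
running_means = np.cumsum(batch_losses) / np.arange(1, len(batch_losses) + 1)
sum_so_far = 0.0
for batch, mean_loss in enumerate(running_means):  # what logs["loss"] would report
    new_sum = mean_loss * (batch + 1)
    recovered = new_sum - sum_so_far               # same arithmetic as in on_batch_end()
    sum_so_far = new_sum
    print(f"batch {batch}: reported mean {mean_loss:.4f} -> recovered loss {recovered:.4f}")
###Output
_____no_output_____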
###Markdown
The `find_learning_rate()` function trains the model using the `ExponentialLearningRate` callback, and it returns the learning rates and corresponding batch losses. At the end, it restores the model and its optimizer to their initial state.
###Code
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=1e-4,
max_rate=1):
init_weights = model.get_weights()
iterations = math.ceil(len(X) / batch_size) * epochs
factor = (max_rate / min_rate) ** (1 / iterations)
init_lr = K.get_value(model.optimizer.learning_rate)
K.set_value(model.optimizer.learning_rate, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.learning_rate, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
###Output
_____no_output_____
###Markdown
The `plot_lr_vs_loss()` function plots the learning rates vs the losses. The optimal learning rate to use as the maximum learning rate in 1cycle is near the bottom of the curve.
###Code
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses, "b")
plt.gca().set_xscale('log')
max_loss = losses[0] + min(losses)
plt.hlines(min(losses), min(rates), max(rates), color="k")
plt.axis([min(rates), max(rates), 0, max_loss])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
plt.grid()
###Output
_____no_output_____
###Markdown
Let's build a simple Fashion MNIST model and compile it:
###Code
model = build_model()
model.compile(loss="sparse_categorical_crossentropy",
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's find the optimal max learning rate for 1cycle:
###Code
batch_size = 128
rates, losses = find_learning_rate(model, X_train, y_train, epochs=1,
batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
###Output
430/430 [==============================] - 1s 1ms/step - loss: 1.7725 - accuracy: 0.4122
###Markdown
Looks like the max learning rate to use for 1cycle is around $10^{-1}$ (i.e., 0.1). The `OneCycleScheduler` custom callback updates the learning rate at the beginning of each batch. It applies the logic described in the book: increase the learning rate linearly during about half of training, then reduce it linearly back to the initial learning rate, and lastly reduce it down to close to zero linearly for the very last part of training.
###Code
class OneCycleScheduler(tf.keras.callbacks.Callback):
def __init__(self, iterations, max_lr=1e-3, start_lr=None,
last_iterations=None, last_lr=None):
self.iterations = iterations
self.max_lr = max_lr
self.start_lr = start_lr or max_lr / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_lr = last_lr or self.start_lr / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, lr1, lr2):
return (lr2 - lr1) * (self.iteration - iter1) / (iter2 - iter1) + lr1
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
lr = self._interpolate(0, self.half_iteration, self.start_lr,
self.max_lr)
elif self.iteration < 2 * self.half_iteration:
lr = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_lr, self.start_lr)
else:
lr = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_lr, self.last_lr)
self.iteration += 1
K.set_value(self.model.optimizer.learning_rate, lr)
###Output
_____no_output_____
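###Markdown
To visualize the resulting profile (an added sketch, not part of the original notebook; it re-implements the same piecewise-linear interpolation as `OneCycleScheduler` with NumPy instead of running the callback):
###Code
# extra sketch (not in the original) – plot the 1cycle learning rate profile
def one_cycle_lrs(iterations, max_lr=1e-3, start_lr=None,
                  last_iterations=None, last_lr=None):
    start_lr = start_lr or max_lr / 10
    last_iterations = last_iterations or iterations // 10 + 1
    half_iteration = (iterations - last_iterations) // 2
    last_lr = last_lr or start_lr / 1000
    lrs = []
    for it in range(iterations):
        if it < half_iteration:          # linear warm-up
            lrs.append(np.interp(it, [0, half_iteration], [start_lr, max_lr]))
        elif it < 2 * half_iteration:    # linear cool-down
            lrs.append(np.interp(it, [half_iteration, 2 * half_iteration],
                                 [max_lr, start_lr]))
        else:                            # final drop toward zero
            lrs.append(np.interp(it, [2 * half_iteration, iterations],
                                 [start_lr, last_lr]))
    return lrs

plt.plot(one_cycle_lrs(10_000, max_lr=0.1))
plt.xlabel("Iteration")
plt.ylabel("Learning Rate")
plt.title("1cycle schedule (sketch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____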
###Markdown
Let's build and compile a simple Fashion MNIST model, then train it using the `OneCycleScheduler` callback:
###Code
model = build_model()
model.compile(loss="sparse_categorical_crossentropy",
optimizer=tf.keras.optimizers.SGD(),
metrics=["accuracy"])
n_epochs = 25
onecycle = OneCycleScheduler(math.ceil(len(X_train) / batch_size) * n_epochs,
max_lr=0.1)
history = model.fit(X_train, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 1s 2ms/step - loss: 0.9502 - accuracy: 0.6913 - val_loss: 0.6003 - val_accuracy: 0.7874
Epoch 2/25
430/430 [==============================] - 1s 1ms/step - loss: 0.5695 - accuracy: 0.8025 - val_loss: 0.4918 - val_accuracy: 0.8248
Epoch 3/25
430/430 [==============================] - 1s 1ms/step - loss: 0.4954 - accuracy: 0.8252 - val_loss: 0.4762 - val_accuracy: 0.8264
Epoch 4/25
430/430 [==============================] - 1s 1ms/step - loss: 0.4515 - accuracy: 0.8402 - val_loss: 0.4261 - val_accuracy: 0.8478
Epoch 5/25
430/430 [==============================] - 1s 1ms/step - loss: 0.4225 - accuracy: 0.8492 - val_loss: 0.4066 - val_accuracy: 0.8486
Epoch 6/25
430/430 [==============================] - 1s 1ms/step - loss: 0.3958 - accuracy: 0.8571 - val_loss: 0.4787 - val_accuracy: 0.8224
Epoch 7/25
430/430 [==============================] - 1s 1ms/step - loss: 0.3787 - accuracy: 0.8626 - val_loss: 0.3917 - val_accuracy: 0.8566
Epoch 8/25
430/430 [==============================] - 1s 1ms/step - loss: 0.3630 - accuracy: 0.8683 - val_loss: 0.4719 - val_accuracy: 0.8296
Epoch 9/25
430/430 [==============================] - 1s 1ms/step - loss: 0.3512 - accuracy: 0.8724 - val_loss: 0.3673 - val_accuracy: 0.8652
Epoch 10/25
430/430 [==============================] - 1s 1ms/step - loss: 0.3360 - accuracy: 0.8766 - val_loss: 0.4957 - val_accuracy: 0.8466
Epoch 11/25
430/430 [==============================] - 1s 1ms/step - loss: 0.3287 - accuracy: 0.8786 - val_loss: 0.4187 - val_accuracy: 0.8370
Epoch 12/25
430/430 [==============================] - 1s 1ms/step - loss: 0.3173 - accuracy: 0.8815 - val_loss: 0.3425 - val_accuracy: 0.8728
Epoch 13/25
430/430 [==============================] - 1s 1ms/step - loss: 0.2961 - accuracy: 0.8910 - val_loss: 0.3217 - val_accuracy: 0.8792
Epoch 14/25
430/430 [==============================] - 1s 1ms/step - loss: 0.2818 - accuracy: 0.8958 - val_loss: 0.3734 - val_accuracy: 0.8692
Epoch 15/25
430/430 [==============================] - 1s 1ms/step - loss: 0.2675 - accuracy: 0.9003 - val_loss: 0.3261 - val_accuracy: 0.8844
Epoch 16/25
430/430 [==============================] - 1s 1ms/step - loss: 0.2558 - accuracy: 0.9055 - val_loss: 0.3205 - val_accuracy: 0.8820
Epoch 17/25
430/430 [==============================] - 1s 1ms/step - loss: 0.2464 - accuracy: 0.9091 - val_loss: 0.3089 - val_accuracy: 0.8894
Epoch 18/25
430/430 [==============================] - 1s 1ms/step - loss: 0.2368 - accuracy: 0.9115 - val_loss: 0.3130 - val_accuracy: 0.8870
Epoch 19/25
430/430 [==============================] - 1s 1ms/step - loss: 0.2292 - accuracy: 0.9145 - val_loss: 0.3078 - val_accuracy: 0.8854
Epoch 20/25
430/430 [==============================] - 1s 1ms/step - loss: 0.2205 - accuracy: 0.9186 - val_loss: 0.3092 - val_accuracy: 0.8886
Epoch 21/25
430/430 [==============================] - 1s 1ms/step - loss: 0.2138 - accuracy: 0.9209 - val_loss: 0.3022 - val_accuracy: 0.8914
Epoch 22/25
430/430 [==============================] - 1s 1ms/step - loss: 0.2073 - accuracy: 0.9232 - val_loss: 0.3054 - val_accuracy: 0.8914
Epoch 23/25
430/430 [==============================] - 1s 1ms/step - loss: 0.2020 - accuracy: 0.9261 - val_loss: 0.3026 - val_accuracy: 0.8896
Epoch 24/25
430/430 [==============================] - 1s 1ms/step - loss: 0.1989 - accuracy: 0.9273 - val_loss: 0.3020 - val_accuracy: 0.8922
Epoch 25/25
430/430 [==============================] - 1s 1ms/step - loss: 0.1967 - accuracy: 0.9276 - val_loss: 0.3016 - val_accuracy: 0.8920
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal",
kernel_regularizer=tf.keras.regularizers.l2(0.01))
###Output
_____no_output_____
###Markdown
Or use `l1(0.1)` for ℓ1 regularization with a factor of 0.1, or `l1_l2(0.1, 0.01)` for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively.
###Code
tf.random.set_seed(42) # extra code – for reproducibility
from functools import partial
RegularizedDense = partial(tf.keras.layers.Dense,
activation="relu",
kernel_initializer="he_normal",
kernel_regularizer=tf.keras.regularizers.l2(0.01))
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(100),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
# extra code – compile and train the model
optimizer = tf.keras.optimizers.SGD(learning_rate=0.02)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=2,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 2s 878us/step - loss: 3.1224 - accuracy: 0.7748 - val_loss: 1.8602 - val_accuracy: 0.8264
Epoch 2/2
1719/1719 [==============================] - 1s 814us/step - loss: 1.4263 - accuracy: 0.8159 - val_loss: 1.1269 - val_accuracy: 0.8182
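###Markdown
For completeness, here is a minimal sketch (added, not from the original notebook) of a layer using the combined `l1_l2` regularizer mentioned above:
###Code
# extra sketch (not in the original) – combined ℓ1 + ℓ2 penalty on a single layer
layer = tf.keras.layers.Dense(100, activation="relu",
                              kernel_initializer="he_normal",
                              kernel_regularizer=tf.keras.regularizers.l1_l2(0.1, 0.01))
###Output
_____no_output_____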
###Markdown
Dropout
###Code
tf.random.set_seed(42) # extra code – for reproducibility
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=[28, 28]),
tf.keras.layers.Dropout(rate=0.2),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.Dropout(rate=0.2),
tf.keras.layers.Dense(100, activation="relu",
kernel_initializer="he_normal"),
tf.keras.layers.Dropout(rate=0.2),
tf.keras.layers.Dense(10, activation="softmax")
])
# extra code – compile and train the model
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.6703 - accuracy: 0.7536 - val_loss: 0.4498 - val_accuracy: 0.8342
Epoch 2/10
1719/1719 [==============================] - 2s 996us/step - loss: 0.5103 - accuracy: 0.8136 - val_loss: 0.4401 - val_accuracy: 0.8296
Epoch 3/10
1719/1719 [==============================] - 2s 998us/step - loss: 0.4712 - accuracy: 0.8263 - val_loss: 0.3806 - val_accuracy: 0.8554
Epoch 4/10
1719/1719 [==============================] - 2s 977us/step - loss: 0.4488 - accuracy: 0.8337 - val_loss: 0.3711 - val_accuracy: 0.8608
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4342 - accuracy: 0.8409 - val_loss: 0.3672 - val_accuracy: 0.8606
Epoch 6/10
1719/1719 [==============================] - 2s 983us/step - loss: 0.4245 - accuracy: 0.8427 - val_loss: 0.3706 - val_accuracy: 0.8600
Epoch 7/10
1719/1719 [==============================] - 2s 995us/step - loss: 0.4131 - accuracy: 0.8467 - val_loss: 0.3582 - val_accuracy: 0.8650
Epoch 8/10
1719/1719 [==============================] - 2s 959us/step - loss: 0.4074 - accuracy: 0.8484 - val_loss: 0.3478 - val_accuracy: 0.8708
Epoch 9/10
1719/1719 [==============================] - 2s 997us/step - loss: 0.4024 - accuracy: 0.8533 - val_loss: 0.3556 - val_accuracy: 0.8690
Epoch 10/10
1719/1719 [==============================] - 2s 998us/step - loss: 0.3903 - accuracy: 0.8552 - val_loss: 0.3453 - val_accuracy: 0.8732
###Markdown
The training accuracy looks like it's lower than the validation accuracy, but that's just because dropout is only active during training. If we evaluate the model on the training set after training (i.e., with dropout turned off), we get the "real" training accuracy, which is very slightly higher than the validation accuracy and the test accuracy:
###Code
model.evaluate(X_train, y_train)
model.evaluate(X_test, y_test)
###Output
313/313 [==============================] - 0s 588us/step - loss: 0.3629 - accuracy: 0.8700
###Markdown
**Note**: make sure to use `AlphaDropout` instead of `Dropout` if you want to build a self-normalizing neural net using SELU. MC Dropout
###Code
tf.random.set_seed(42) # extra code – for reproducibility
y_probas = np.stack([model(X_test, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
model.predict(X_test[:1]).round(3)
y_proba[0].round(3)
y_std = y_probas.std(axis=0)
y_std[0].round(3)
y_pred = y_proba.argmax(axis=1)
accuracy = (y_pred == y_test).sum() / len(y_test)
accuracy
class MCDropout(tf.keras.layers.Dropout):
def call(self, inputs, training=None):
return super().call(inputs, training=True)
# extra code – shows how to convert Dropout to MCDropout in a Sequential model
Dropout = tf.keras.layers.Dropout
mc_model = tf.keras.Sequential([
MCDropout(layer.rate) if isinstance(layer, Dropout) else layer
for layer in model.layers
])
mc_model.set_weights(model.get_weights())
mc_model.summary()
###Output
Model: "sequential_25"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten_22 (Flatten) (None, 784) 0
_________________________________________________________________
mc_dropout (MCDropout) (None, 784) 0
_________________________________________________________________
dense_89 (Dense) (None, 100) 78500
_________________________________________________________________
mc_dropout_1 (MCDropout) (None, 100) 0
_________________________________________________________________
dense_90 (Dense) (None, 100) 10100
_________________________________________________________________
mc_dropout_2 (MCDropout) (None, 100) 0
_________________________________________________________________
dense_91 (Dense) (None, 10) 1010
=================================================================
Total params: 89,610
Trainable params: 89,610
Non-trainable params: 0
_________________________________________________________________
###Markdown
Now we can use the model with MC Dropout:
###Code
# extra code – shows that the model works without retraining
tf.random.set_seed(42)
np.mean([mc_model.predict(X_test[:1])
for sample in range(100)], axis=0).round(2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
dense = tf.keras.layers.Dense(
100, activation="relu", kernel_initializer="he_normal",
kernel_constraint=tf.keras.constraints.max_norm(1.))
# extra code – shows how to apply max norm to every hidden layer in a model
MaxNormDense = partial(tf.keras.layers.Dense,
activation="relu", kernel_initializer="he_normal",
kernel_constraint=tf.keras.constraints.max_norm(1.))
tf.random.set_seed(42)
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(100),
MaxNormDense(100),
tf.keras.layers.Dense(10, activation="softmax")
])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5500 - accuracy: 0.8015 - val_loss: 0.4510 - val_accuracy: 0.8242
Epoch 2/10
1719/1719 [==============================] - 2s 960us/step - loss: 0.4089 - accuracy: 0.8499 - val_loss: 0.3956 - val_accuracy: 0.8504
Epoch 3/10
1719/1719 [==============================] - 2s 974us/step - loss: 0.3777 - accuracy: 0.8604 - val_loss: 0.3693 - val_accuracy: 0.8680
Epoch 4/10
1719/1719 [==============================] - 2s 943us/step - loss: 0.3581 - accuracy: 0.8690 - val_loss: 0.3517 - val_accuracy: 0.8716
Epoch 5/10
1719/1719 [==============================] - 2s 949us/step - loss: 0.3416 - accuracy: 0.8729 - val_loss: 0.3433 - val_accuracy: 0.8682
Epoch 6/10
1719/1719 [==============================] - 2s 951us/step - loss: 0.3368 - accuracy: 0.8756 - val_loss: 0.4045 - val_accuracy: 0.8582
Epoch 7/10
1719/1719 [==============================] - 2s 935us/step - loss: 0.3293 - accuracy: 0.8767 - val_loss: 0.4168 - val_accuracy: 0.8476
Epoch 8/10
1719/1719 [==============================] - 2s 951us/step - loss: 0.3258 - accuracy: 0.8779 - val_loss: 0.3570 - val_accuracy: 0.8674
Epoch 9/10
1719/1719 [==============================] - 2s 970us/step - loss: 0.3269 - accuracy: 0.8787 - val_loss: 0.3702 - val_accuracy: 0.8578
Epoch 10/10
1719/1719 [==============================] - 2s 948us/step - loss: 0.3169 - accuracy: 0.8809 - val_loss: 0.3907 - val_accuracy: 0.8578
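###Markdown
To see what the constraint actually does (a small sketch added here, not part of the original notebook): Keras constraints are callables that rescale a weight matrix, and `max_norm(1.)` rescales every column (each unit's incoming weight vector) whose L2 norm exceeds 1:
###Code
# extra sketch (not in the original) – apply max_norm(1.) to a toy kernel
weights = tf.constant([[3.0, 0.3],
                       [4.0, 0.4]])  # column norms: 5.0 and 0.5
constrained = tf.keras.constraints.max_norm(1.)(weights)
print(constrained.numpy())  # first column rescaled to norm 1, second (almost) unchanged
###Output
_____no_output_____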
###Markdown
Exercises 1. to 7. 1. Glorot initialization and He initialization were designed to make the output standard deviation as close as possible to the input standard deviation, at least at the beginning of training. This reduces the vanishing/exploding gradients problem.2. No, all weights should be sampled independently; they should not all have the same initial value. One important goal of sampling weights randomly is to break symmetry: if all the weights have the same initial value, even if that value is not zero, then symmetry is not broken (i.e., all neurons in a given layer are equivalent), and backpropagation will be unable to break it. Concretely, this means that all the neurons in any given layer will always have the same weights. It's like having just one neuron per layer, and much slower. It is virtually impossible for such a configuration to converge to a good solution.3. It is perfectly fine to initialize the bias terms to zero. Some people like to initialize them just like weights, and that's OK too; it does not make much difference.4. ReLU is usually a good default for the hidden layers, as it is fast and yields good results. Its ability to output precisely zero can also be useful in some cases (e.g., see Chapter 17). Moreover, it can sometimes benefit from optimized implementations as well as from hardware acceleration. The leaky ReLU variants of ReLU can improve the model's quality without hindering its speed too much compared to ReLU. For large neural nets and more complex problems, GLU, Swish and Mish can give you a slightly higher quality model, but they have a computational cost. The hyperbolic tangent (tanh) can be useful in the output layer if you need to output a number in a fixed range (by default between –1 and 1), but nowadays it is not used much in hidden layers, except in recurrent nets. The sigmoid activation function is also useful in the output layer when you need to estimate a probability (e.g., for binary classification), but it is rarely used in hidden layers (there are exceptions—for example, for the coding layer of variational autoencoders; see Chapter 17). The softplus activation function is useful in the output layer when you need to ensure that the output will always be positive. The softmax activation function is useful in the output layer to estimate probabilities for mutually exclusive classes, but it is rarely (if ever) used in hidden layers.5. If you set the `momentum` hyperparameter too close to 1 (e.g., 0.99999) when using an `SGD` optimizer, then the algorithm will likely pick up a lot of speed, hopefully moving roughly toward the global minimum, but its momentum will carry it right past the minimum. Then it will slow down and come back, accelerate again, overshoot again, and so on. It may oscillate this way many times before converging, so overall it will take much longer to converge than with a smaller `momentum` value.6. One way to produce a sparse model (i.e., with most weights equal to zero) is to train the model normally, then zero out tiny weights. For more sparsity, you can apply ℓ1 regularization during training, which pushes the optimizer toward sparsity. A third option is to use the TensorFlow Model Optimization Toolkit.7. Yes, dropout does slow down training, in general roughly by a factor of two. However, it has no impact on inference speed since it is only turned on during training. MC Dropout is exactly like dropout during training, but it is still active during inference, so each inference is slowed down slightly. 
More importantly, when using MC Dropout you generally want to run inference 10 times or more to get better predictions. This means that making predictions is slowed down by a factor of 10 or more. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the Swish activation function.*
###Code
tf.random.set_seed(42)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(tf.keras.layers.Dense(100,
activation="swish",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `tf.keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(tf.keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = tf.keras.optimizers.Nadam(learning_rate=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
cifar10 = tf.keras.datasets.cifar10.load_data()
(X_train_full, y_train_full), (X_test, y_test) = cifar10
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = tf.keras.callbacks.EarlyStopping(patience=20,
restore_best_weights=True)
model_checkpoint_cb = tf.keras.callbacks.ModelCheckpoint("my_cifar10_model",
save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = Path() / "my_cifar10_logs" / f"run_{run_index:03d}"
tensorboard_cb = tf.keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%load_ext tensorboard
%tensorboard --logdir=./my_cifar10_logs
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 2ms/step - loss: 1.5062 - accuracy: 0.4676
###Markdown
The model with the lowest validation loss gets about 46.8% accuracy on the validation set. It took 29 epochs to reach the lowest validation loss, with roughly 10 seconds per epoch on my laptop (without a GPU). Let's see if we can improve the model using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to `my_cifar10_bn_model`.
###Code
tf.random.set_seed(42)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(tf.keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Activation("swish"))
model.add(tf.keras.layers.Dense(10, activation="softmax"))
optimizer = tf.keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = tf.keras.callbacks.EarlyStopping(patience=20,
restore_best_weights=True)
model_checkpoint_cb = tf.keras.callbacks.ModelCheckpoint("my_cifar10_bn_model",
save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = Path() / "my_cifar10_logs" / f"run_bn_{run_index:03d}"
tensorboard_cb = tf.keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model.evaluate(X_valid, y_valid)
###Output
Epoch 1/100
1403/1407 [============================>.] - ETA: 0s - loss: 2.0377 - accuracy: 0.2523INFO:tensorflow:Assets written to: my_cifar10_bn_model/assets
1407/1407 [==============================] - 32s 18ms/step - loss: 2.0374 - accuracy: 0.2525 - val_loss: 1.8766 - val_accuracy: 0.3154
Epoch 2/100
1407/1407 [==============================] - 17s 12ms/step - loss: 1.7874 - accuracy: 0.3542 - val_loss: 1.8784 - val_accuracy: 0.3268
Epoch 3/100
1407/1407 [==============================] - 20s 15ms/step - loss: 1.6806 - accuracy: 0.3969 - val_loss: 1.9764 - val_accuracy: 0.3252
Epoch 4/100
1403/1407 [============================>.] - ETA: 0s - loss: 1.6111 - accuracy: 0.4229INFO:tensorflow:Assets written to: my_cifar10_bn_model/assets
1407/1407 [==============================] - 24s 17ms/step - loss: 1.6112 - accuracy: 0.4228 - val_loss: 1.7087 - val_accuracy: 0.3750
Epoch 5/100
1402/1407 [============================>.] - ETA: 0s - loss: 1.5520 - accuracy: 0.4478INFO:tensorflow:Assets written to: my_cifar10_bn_model/assets
1407/1407 [==============================] - 21s 15ms/step - loss: 1.5521 - accuracy: 0.4476 - val_loss: 1.6272 - val_accuracy: 0.4176
Epoch 6/100
1406/1407 [============================>.] - ETA: 0s - loss: 1.5030 - accuracy: 0.4659INFO:tensorflow:Assets written to: my_cifar10_bn_model/assets
1407/1407 [==============================] - 23s 16ms/step - loss: 1.5030 - accuracy: 0.4660 - val_loss: 1.5401 - val_accuracy: 0.4452
Epoch 7/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.4559 - accuracy: 0.4812 - val_loss: 1.6990 - val_accuracy: 0.3952
Epoch 8/100
1403/1407 [============================>.] - ETA: 0s - loss: 1.4169 - accuracy: 0.4987INFO:tensorflow:Assets written to: my_cifar10_bn_model/assets
1407/1407 [==============================] - 21s 15ms/step - loss: 1.4168 - accuracy: 0.4987 - val_loss: 1.5078 - val_accuracy: 0.4652
Epoch 9/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.3863 - accuracy: 0.5123 - val_loss: 1.5513 - val_accuracy: 0.4470
Epoch 10/100
1407/1407 [==============================] - 17s 12ms/step - loss: 1.3514 - accuracy: 0.5216 - val_loss: 1.5208 - val_accuracy: 0.4562
Epoch 11/100
1407/1407 [==============================] - 16s 12ms/step - loss: 1.3220 - accuracy: 0.5314 - val_loss: 1.7301 - val_accuracy: 0.4206
Epoch 12/100
1404/1407 [============================>.] - ETA: 0s - loss: 1.2933 - accuracy: 0.5410INFO:tensorflow:Assets written to: my_cifar10_bn_model/assets
1407/1407 [==============================] - 25s 18ms/step - loss: 1.2931 - accuracy: 0.5410 - val_loss: 1.4909 - val_accuracy: 0.4734
Epoch 13/100
1407/1407 [==============================] - 16s 11ms/step - loss: 1.2702 - accuracy: 0.5490 - val_loss: 1.5256 - val_accuracy: 0.4636
Epoch 14/100
1407/1407 [==============================] - 17s 12ms/step - loss: 1.2424 - accuracy: 0.5591 - val_loss: 1.5569 - val_accuracy: 0.4624
Epoch 15/100
<<12 more lines>>
Epoch 21/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.1174 - accuracy: 0.6066 - val_loss: 1.5241 - val_accuracy: 0.4828
Epoch 22/100
1407/1407 [==============================] - 18s 13ms/step - loss: 1.0978 - accuracy: 0.6128 - val_loss: 1.5313 - val_accuracy: 0.4772
Epoch 23/100
1407/1407 [==============================] - 17s 12ms/step - loss: 1.0844 - accuracy: 0.6198 - val_loss: 1.4993 - val_accuracy: 0.4924
Epoch 24/100
1407/1407 [==============================] - 17s 12ms/step - loss: 1.0677 - accuracy: 0.6244 - val_loss: 1.4622 - val_accuracy: 0.5078
Epoch 25/100
1407/1407 [==============================] - 18s 13ms/step - loss: 1.0571 - accuracy: 0.6297 - val_loss: 1.4917 - val_accuracy: 0.4990
Epoch 26/100
1407/1407 [==============================] - 19s 14ms/step - loss: 1.0395 - accuracy: 0.6327 - val_loss: 1.4888 - val_accuracy: 0.4896
Epoch 27/100
1407/1407 [==============================] - 18s 13ms/step - loss: 1.0298 - accuracy: 0.6370 - val_loss: 1.5358 - val_accuracy: 0.5024
Epoch 28/100
1407/1407 [==============================] - 18s 13ms/step - loss: 1.0150 - accuracy: 0.6444 - val_loss: 1.5219 - val_accuracy: 0.5030
Epoch 29/100
1407/1407 [==============================] - 16s 12ms/step - loss: 1.0100 - accuracy: 0.6456 - val_loss: 1.4933 - val_accuracy: 0.5098
Epoch 30/100
1407/1407 [==============================] - 20s 14ms/step - loss: 0.9956 - accuracy: 0.6492 - val_loss: 1.4756 - val_accuracy: 0.5012
Epoch 31/100
1407/1407 [==============================] - 16s 11ms/step - loss: 0.9787 - accuracy: 0.6576 - val_loss: 1.5181 - val_accuracy: 0.4936
Epoch 32/100
1407/1407 [==============================] - 17s 12ms/step - loss: 0.9710 - accuracy: 0.6565 - val_loss: 1.7510 - val_accuracy: 0.4568
Epoch 33/100
1407/1407 [==============================] - 20s 14ms/step - loss: 0.9613 - accuracy: 0.6628 - val_loss: 1.5576 - val_accuracy: 0.4910
Epoch 34/100
1407/1407 [==============================] - 19s 14ms/step - loss: 0.9530 - accuracy: 0.6651 - val_loss: 1.5087 - val_accuracy: 0.5046
Epoch 35/100
1407/1407 [==============================] - 19s 13ms/step - loss: 0.9388 - accuracy: 0.6701 - val_loss: 1.5534 - val_accuracy: 0.4950
Epoch 36/100
1407/1407 [==============================] - 17s 12ms/step - loss: 0.9331 - accuracy: 0.6743 - val_loss: 1.5033 - val_accuracy: 0.5046
Epoch 37/100
1407/1407 [==============================] - 19s 14ms/step - loss: 0.9144 - accuracy: 0.6808 - val_loss: 1.5679 - val_accuracy: 0.5028
157/157 [==============================] - 0s 2ms/step - loss: 1.4236 - accuracy: 0.5074
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 29 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 12 epochs and continued to make progress until the 17th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 50.7% validation accuracy instead of 46.7%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see Chapter 14).* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 15s instead of 10s, because of the extra computations required by the BN layers. But overall the training time (wall time) to reach the best model was shortened by about 10%. d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
tf.random.set_seed(42)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(tf.keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(tf.keras.layers.Dense(10, activation="softmax"))
optimizer = tf.keras.optimizers.Nadam(learning_rate=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = tf.keras.callbacks.EarlyStopping(
patience=20, restore_best_weights=True)
model_checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
"my_cifar10_selu_model", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = Path() / "my_cifar10_logs" / f"run_selu_{run_index:03d}"
tensorboard_cb = tf.keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
1403/1407 [============================>.] - ETA: 0s - loss: 1.9386 - accuracy: 0.3045INFO:tensorflow:Assets written to: my_cifar10_selu_model/assets
1407/1407 [==============================] - 20s 13ms/step - loss: 1.9385 - accuracy: 0.3046 - val_loss: 1.8175 - val_accuracy: 0.3510
Epoch 2/100
1405/1407 [============================>.] - ETA: 0s - loss: 1.7241 - accuracy: 0.3869INFO:tensorflow:Assets written to: my_cifar10_selu_model/assets
1407/1407 [==============================] - 16s 11ms/step - loss: 1.7241 - accuracy: 0.3869 - val_loss: 1.7677 - val_accuracy: 0.3614
Epoch 3/100
1407/1407 [==============================] - ETA: 0s - loss: 1.6272 - accuracy: 0.4263INFO:tensorflow:Assets written to: my_cifar10_selu_model/assets
1407/1407 [==============================] - 18s 13ms/step - loss: 1.6272 - accuracy: 0.4263 - val_loss: 1.6878 - val_accuracy: 0.4054
Epoch 4/100
1406/1407 [============================>.] - ETA: 0s - loss: 1.5644 - accuracy: 0.4492INFO:tensorflow:Assets written to: my_cifar10_selu_model/assets
1407/1407 [==============================] - 18s 13ms/step - loss: 1.5643 - accuracy: 0.4492 - val_loss: 1.6589 - val_accuracy: 0.4304
Epoch 5/100
1404/1407 [============================>.] - ETA: 0s - loss: 1.5080 - accuracy: 0.4712INFO:tensorflow:Assets written to: my_cifar10_selu_model/assets
1407/1407 [==============================] - 16s 11ms/step - loss: 1.5080 - accuracy: 0.4712 - val_loss: 1.5651 - val_accuracy: 0.4538
Epoch 6/100
1404/1407 [============================>.] - ETA: 0s - loss: 1.4611 - accuracy: 0.4873INFO:tensorflow:Assets written to: my_cifar10_selu_model/assets
1407/1407 [==============================] - 17s 12ms/step - loss: 1.4613 - accuracy: 0.4872 - val_loss: 1.5305 - val_accuracy: 0.4678
Epoch 7/100
1407/1407 [==============================] - 17s 12ms/step - loss: 1.4174 - accuracy: 0.5077 - val_loss: 1.5346 - val_accuracy: 0.4558
Epoch 8/100
1406/1407 [============================>.] - ETA: 0s - loss: 1.3781 - accuracy: 0.5175INFO:tensorflow:Assets written to: my_cifar10_selu_model/assets
1407/1407 [==============================] - 17s 12ms/step - loss: 1.3781 - accuracy: 0.5175 - val_loss: 1.4773 - val_accuracy: 0.4882
Epoch 9/100
1407/1407 [==============================] - 16s 11ms/step - loss: 1.3413 - accuracy: 0.5345 - val_loss: 1.5021 - val_accuracy: 0.4764
Epoch 10/100
1407/1407 [==============================] - 15s 10ms/step - loss: 1.3182 - accuracy: 0.5422 - val_loss: 1.5709 - val_accuracy: 0.4762
Epoch 11/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.2832 - accuracy: 0.5571 - val_loss: 1.5345 - val_accuracy: 0.4868
Epoch 12/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.2557 - accuracy: 0.5667 - val_loss: 1.5024 - val_accuracy: 0.4900
Epoch 13/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.2373 - accuracy: 0.5710 - val_loss: 1.5114 - val_accuracy: 0.5028
Epoch 14/100
1404/1407 [============================>.] - ETA: 0s - loss: 1.2071 - accuracy: 0.5846INFO:tensorflow:Assets written to: my_cifar10_selu_model/assets
1407/1407 [==============================] - 17s 12ms/step - loss: 1.2073 - accuracy: 0.5847 - val_loss: 1.4608 - val_accuracy: 0.5026
Epoch 15/100
1407/1407 [==============================] - 16s 11ms/step - loss: 1.1843 - accuracy: 0.5940 - val_loss: 1.4962 - val_accuracy: 0.5038
Epoch 16/100
1407/1407 [==============================] - 16s 12ms/step - loss: 1.1617 - accuracy: 0.6026 - val_loss: 1.5255 - val_accuracy: 0.5062
Epoch 17/100
1407/1407 [==============================] - 16s 11ms/step - loss: 1.1452 - accuracy: 0.6084 - val_loss: 1.5057 - val_accuracy: 0.5036
Epoch 18/100
1407/1407 [==============================] - 17s 12ms/step - loss: 1.1297 - accuracy: 0.6145 - val_loss: 1.5097 - val_accuracy: 0.5010
Epoch 19/100
1407/1407 [==============================] - 16s 12ms/step - loss: 1.1004 - accuracy: 0.6245 - val_loss: 1.5218 - val_accuracy: 0.5014
Epoch 20/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.0971 - accuracy: 0.6304 - val_loss: 1.5253 - val_accuracy: 0.5090
Epoch 21/100
1407/1407 [==============================] - 16s 11ms/step - loss: 1.0670 - accuracy: 0.6345 - val_loss: 1.5006 - val_accuracy: 0.5034
Epoch 22/100
1407/1407 [==============================] - 16s 11ms/step - loss: 1.0544 - accuracy: 0.6407 - val_loss: 1.5244 - val_accuracy: 0.5010
Epoch 23/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.0338 - accuracy: 0.6502 - val_loss: 1.5355 - val_accuracy: 0.5096
Epoch 24/100
1407/1407 [==============================] - 14s 10ms/step - loss: 1.0281 - accuracy: 0.6514 - val_loss: 1.5257 - val_accuracy: 0.5164
Epoch 25/100
1407/1407 [==============================] - 14s 10ms/step - loss: 1.4097 - accuracy: 0.6478 - val_loss: 1.8203 - val_accuracy: 0.3514
Epoch 26/100
1407/1407 [==============================] - 16s 11ms/step - loss: 1.3733 - accuracy: 0.5157 - val_loss: 1.5600 - val_accuracy: 0.4664
Epoch 27/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.2032 - accuracy: 0.5814 - val_loss: 1.5367 - val_accuracy: 0.4944
Epoch 28/100
1407/1407 [==============================] - 16s 11ms/step - loss: 1.1291 - accuracy: 0.6121 - val_loss: 1.5333 - val_accuracy: 0.4852
Epoch 29/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.0734 - accuracy: 0.6317 - val_loss: 1.5475 - val_accuracy: 0.5032
Epoch 30/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.0294 - accuracy: 0.6469 - val_loss: 1.5400 - val_accuracy: 0.5052
Epoch 31/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.0081 - accuracy: 0.6605 - val_loss: 1.5617 - val_accuracy: 0.4856
Epoch 32/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.0109 - accuracy: 0.6603 - val_loss: 1.5727 - val_accuracy: 0.5124
Epoch 33/100
1407/1407 [==============================] - 17s 12ms/step - loss: 0.9646 - accuracy: 0.6762 - val_loss: 1.5333 - val_accuracy: 0.5174
Epoch 34/100
1407/1407 [==============================] - 16s 11ms/step - loss: 0.9597 - accuracy: 0.6789 - val_loss: 1.5601 - val_accuracy: 0.5016
157/157 [==============================] - 0s 1ms/step - loss: 1.4608 - accuracy: 0.5026
###Markdown
This model reached the first model's validation loss in just 8 epochs. After 14 epochs, it reached its lowest validation loss, with about 50.3% accuracy, which is better than the original model (46.7%), but not quite as good as the model using batch normalization (50.7%). Each epoch took only 9 seconds. So it's the fastest model to train so far. e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
tf.random.set_seed(42)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(tf.keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(tf.keras.layers.AlphaDropout(rate=0.1))
model.add(tf.keras.layers.Dense(10, activation="softmax"))
optimizer = tf.keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = tf.keras.callbacks.EarlyStopping(
patience=20, restore_best_weights=True)
model_checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
"my_cifar10_alpha_dropout_model", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = Path() / "my_cifar10_logs" / f"run_alpha_dropout_{run_index:03d}"
tensorboard_cb = tf.keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
1405/1407 [============================>.] - ETA: 0s - loss: 1.8953 - accuracy: 0.3240INFO:tensorflow:Assets written to: my_cifar10_alpha_dropout_model/assets
1407/1407 [==============================] - 18s 11ms/step - loss: 1.8950 - accuracy: 0.3239 - val_loss: 1.7556 - val_accuracy: 0.3812
Epoch 2/100
1403/1407 [============================>.] - ETA: 0s - loss: 1.6618 - accuracy: 0.4129INFO:tensorflow:Assets written to: my_cifar10_alpha_dropout_model/assets
1407/1407 [==============================] - 16s 11ms/step - loss: 1.6618 - accuracy: 0.4130 - val_loss: 1.6563 - val_accuracy: 0.4114
Epoch 3/100
1402/1407 [============================>.] - ETA: 0s - loss: 1.5772 - accuracy: 0.4431INFO:tensorflow:Assets written to: my_cifar10_alpha_dropout_model/assets
1407/1407 [==============================] - 16s 11ms/step - loss: 1.5770 - accuracy: 0.4432 - val_loss: 1.6507 - val_accuracy: 0.4232
Epoch 4/100
1406/1407 [============================>.] - ETA: 0s - loss: 1.5081 - accuracy: 0.4673INFO:tensorflow:Assets written to: my_cifar10_alpha_dropout_model/assets
1407/1407 [==============================] - 15s 10ms/step - loss: 1.5081 - accuracy: 0.4672 - val_loss: 1.5892 - val_accuracy: 0.4566
Epoch 5/100
1403/1407 [============================>.] - ETA: 0s - loss: 1.4560 - accuracy: 0.4902INFO:tensorflow:Assets written to: my_cifar10_alpha_dropout_model/assets
1407/1407 [==============================] - 14s 10ms/step - loss: 1.4561 - accuracy: 0.4902 - val_loss: 1.5382 - val_accuracy: 0.4696
Epoch 6/100
1401/1407 [============================>.] - ETA: 0s - loss: 1.4095 - accuracy: 0.5050INFO:tensorflow:Assets written to: my_cifar10_alpha_dropout_model/assets
1407/1407 [==============================] - 16s 11ms/step - loss: 1.4094 - accuracy: 0.5050 - val_loss: 1.5236 - val_accuracy: 0.4818
Epoch 7/100
1401/1407 [============================>.] - ETA: 0s - loss: 1.3634 - accuracy: 0.5234INFO:tensorflow:Assets written to: my_cifar10_alpha_dropout_model/assets
1407/1407 [==============================] - 14s 10ms/step - loss: 1.3636 - accuracy: 0.5232 - val_loss: 1.5139 - val_accuracy: 0.4840
Epoch 8/100
1405/1407 [============================>.] - ETA: 0s - loss: 1.3297 - accuracy: 0.5377INFO:tensorflow:Assets written to: my_cifar10_alpha_dropout_model/assets
1407/1407 [==============================] - 15s 11ms/step - loss: 1.3296 - accuracy: 0.5378 - val_loss: 1.4780 - val_accuracy: 0.4982
Epoch 9/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.2907 - accuracy: 0.5485 - val_loss: 1.5151 - val_accuracy: 0.4854
Epoch 10/100
1407/1407 [==============================] - 13s 10ms/step - loss: 1.2559 - accuracy: 0.5646 - val_loss: 1.4980 - val_accuracy: 0.4976
Epoch 11/100
1407/1407 [==============================] - 14s 10ms/step - loss: 1.2221 - accuracy: 0.5767 - val_loss: 1.5199 - val_accuracy: 0.4990
Epoch 12/100
1407/1407 [==============================] - 13s 9ms/step - loss: 1.1960 - accuracy: 0.5870 - val_loss: 1.5167 - val_accuracy: 0.5030
Epoch 13/100
1407/1407 [==============================] - 14s 10ms/step - loss: 1.1684 - accuracy: 0.5955 - val_loss: 1.5815 - val_accuracy: 0.5014
Epoch 14/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.1463 - accuracy: 0.6025 - val_loss: 1.5427 - val_accuracy: 0.5112
Epoch 15/100
1407/1407 [==============================] - 13s 9ms/step - loss: 1.1125 - accuracy: 0.6169 - val_loss: 1.5868 - val_accuracy: 0.5212
Epoch 16/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0854 - accuracy: 0.6243 - val_loss: 1.6234 - val_accuracy: 0.5090
Epoch 17/100
1407/1407 [==============================] - 15s 11ms/step - loss: 1.0668 - accuracy: 0.6328 - val_loss: 1.6162 - val_accuracy: 0.5072
Epoch 18/100
1407/1407 [==============================] - 15s 10ms/step - loss: 1.0440 - accuracy: 0.6442 - val_loss: 1.5748 - val_accuracy: 0.5162
Epoch 19/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.0272 - accuracy: 0.6477 - val_loss: 1.6518 - val_accuracy: 0.5200
Epoch 20/100
1407/1407 [==============================] - 13s 10ms/step - loss: 1.0007 - accuracy: 0.6594 - val_loss: 1.6224 - val_accuracy: 0.5186
Epoch 21/100
1407/1407 [==============================] - 15s 10ms/step - loss: 0.9824 - accuracy: 0.6639 - val_loss: 1.6972 - val_accuracy: 0.5136
Epoch 22/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9660 - accuracy: 0.6714 - val_loss: 1.7210 - val_accuracy: 0.5278
Epoch 23/100
1407/1407 [==============================] - 13s 10ms/step - loss: 0.9472 - accuracy: 0.6780 - val_loss: 1.6436 - val_accuracy: 0.5006
Epoch 24/100
1407/1407 [==============================] - 14s 10ms/step - loss: 0.9314 - accuracy: 0.6819 - val_loss: 1.7059 - val_accuracy: 0.5160
Epoch 25/100
1407/1407 [==============================] - 13s 9ms/step - loss: 0.9172 - accuracy: 0.6888 - val_loss: 1.6926 - val_accuracy: 0.5200
Epoch 26/100
1407/1407 [==============================] - 14s 10ms/step - loss: 0.8990 - accuracy: 0.6947 - val_loss: 1.7705 - val_accuracy: 0.5148
Epoch 27/100
1407/1407 [==============================] - 13s 9ms/step - loss: 0.8758 - accuracy: 0.7028 - val_loss: 1.7023 - val_accuracy: 0.5198
Epoch 28/100
1407/1407 [==============================] - 12s 8ms/step - loss: 0.8622 - accuracy: 0.7090 - val_loss: 1.7567 - val_accuracy: 0.5184
157/157 [==============================] - 0s 1ms/step - loss: 1.4780 - accuracy: 0.4982
###Markdown
The model reaches about 49.8% accuracy on the validation set. That's worse than without dropout (50.3%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need an `MCAlphaDropout` class, analogous to the `MCDropout` class we defined earlier, so let's define it here:
###Code
class MCAlphaDropout(tf.keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = tf.keras.Sequential([
(
MCAlphaDropout(layer.rate)
if isinstance(layer, tf.keras.layers.AlphaDropout)
else layer
)
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return Y_probas.argmax(axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
tf.random.set_seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = (y_pred == y_valid[:, 0]).mean()
accuracy
###Output
_____no_output_____
###Markdown
We get back to roughly the accuracy of the model without dropout in this case (about 50.3% accuracy). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
tf.random.set_seed(42)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(tf.keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(tf.keras.layers.AlphaDropout(rate=0.1))
model.add(tf.keras.layers.Dense(10, activation="softmax"))
optimizer = tf.keras.optimizers.SGD()
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1,
batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
tf.random.set_seed(42)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(tf.keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(tf.keras.layers.AlphaDropout(rate=0.1))
model.add(tf.keras.layers.Dense(10, activation="softmax"))
optimizer = tf.keras.optimizers.SGD(learning_rate=2e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
n_iterations = math.ceil(len(X_train_scaled) / batch_size) * n_epochs
onecycle = OneCycleScheduler(n_iterations, max_lr=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/15
352/352 [==============================] - 3s 9ms/step - loss: 2.0559 - accuracy: 0.2839 - val_loss: 1.7917 - val_accuracy: 0.3768
Epoch 2/15
352/352 [==============================] - 3s 8ms/step - loss: 1.7596 - accuracy: 0.3797 - val_loss: 1.6566 - val_accuracy: 0.4258
Epoch 3/15
352/352 [==============================] - 3s 8ms/step - loss: 1.6199 - accuracy: 0.4247 - val_loss: 1.6395 - val_accuracy: 0.4260
Epoch 4/15
352/352 [==============================] - 3s 9ms/step - loss: 1.5451 - accuracy: 0.4524 - val_loss: 1.6202 - val_accuracy: 0.4408
Epoch 5/15
352/352 [==============================] - 3s 8ms/step - loss: 1.4952 - accuracy: 0.4691 - val_loss: 1.5981 - val_accuracy: 0.4488
Epoch 6/15
352/352 [==============================] - 3s 9ms/step - loss: 1.4541 - accuracy: 0.4842 - val_loss: 1.5720 - val_accuracy: 0.4490
Epoch 7/15
352/352 [==============================] - 3s 9ms/step - loss: 1.4171 - accuracy: 0.4967 - val_loss: 1.6035 - val_accuracy: 0.4470
Epoch 8/15
352/352 [==============================] - 3s 9ms/step - loss: 1.3497 - accuracy: 0.5194 - val_loss: 1.4918 - val_accuracy: 0.4864
Epoch 9/15
352/352 [==============================] - 3s 9ms/step - loss: 1.2788 - accuracy: 0.5459 - val_loss: 1.5597 - val_accuracy: 0.4672
Epoch 10/15
352/352 [==============================] - 3s 9ms/step - loss: 1.2070 - accuracy: 0.5707 - val_loss: 1.5845 - val_accuracy: 0.4864
Epoch 11/15
352/352 [==============================] - 3s 10ms/step - loss: 1.1433 - accuracy: 0.5926 - val_loss: 1.5293 - val_accuracy: 0.4998
Epoch 12/15
352/352 [==============================] - 3s 9ms/step - loss: 1.0745 - accuracy: 0.6182 - val_loss: 1.5118 - val_accuracy: 0.5072
Epoch 13/15
352/352 [==============================] - 3s 10ms/step - loss: 1.0030 - accuracy: 0.6413 - val_loss: 1.5388 - val_accuracy: 0.5204
Epoch 14/15
352/352 [==============================] - 3s 10ms/step - loss: 0.9388 - accuracy: 0.6654 - val_loss: 1.5547 - val_accuracy: 0.5210
Epoch 15/15
352/352 [==============================] - 3s 9ms/step - loss: 0.8989 - accuracy: 0.6805 - val_loss: 1.5835 - val_accuracy: 0.5242
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, make sure Matplotlib plots figures inline in the notebook, and prepare a function to save the figures. We also check that Python 3.5 or later is installed (even though Python 2.x may still work, it is deprecated, so we strongly recommend using Python 3), as well as Scikit-Learn ≥ 0.20.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6314 - accuracy: 0.5054 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.8416 - accuracy: 0.7247 - val_loss: 0.7130 - val_accuracy: 0.7656
Epoch 3/10
1719/1719 [==============================] - 2s 879us/step - loss: 0.7053 - accuracy: 0.7637 - val_loss: 0.6427 - val_accuracy: 0.7898
Epoch 4/10
1719/1719 [==============================] - 2s 883us/step - loss: 0.6325 - accuracy: 0.7908 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 2s 887us/step - loss: 0.5992 - accuracy: 0.8021 - val_loss: 0.5582 - val_accuracy: 0.8200
Epoch 6/10
1719/1719 [==============================] - 2s 881us/step - loss: 0.5624 - accuracy: 0.8142 - val_loss: 0.5350 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.5379 - accuracy: 0.8217 - val_loss: 0.5157 - val_accuracy: 0.8304
Epoch 8/10
1719/1719 [==============================] - 2s 895us/step - loss: 0.5152 - accuracy: 0.8295 - val_loss: 0.5078 - val_accuracy: 0.8284
Epoch 9/10
1719/1719 [==============================] - 2s 911us/step - loss: 0.5100 - accuracy: 0.8268 - val_loss: 0.4895 - val_accuracy: 0.8390
Epoch 10/10
1719/1719 [==============================] - 2s 897us/step - loss: 0.4918 - accuracy: 0.8340 - val_loss: 0.4817 - val_accuracy: 0.8396
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6969 - accuracy: 0.4974 - val_loss: 0.9255 - val_accuracy: 0.7186
Epoch 2/10
1719/1719 [==============================] - 2s 990us/step - loss: 0.8706 - accuracy: 0.7247 - val_loss: 0.7305 - val_accuracy: 0.7630
Epoch 3/10
1719/1719 [==============================] - 2s 980us/step - loss: 0.7211 - accuracy: 0.7621 - val_loss: 0.6564 - val_accuracy: 0.7882
Epoch 4/10
1719/1719 [==============================] - 2s 985us/step - loss: 0.6447 - accuracy: 0.7879 - val_loss: 0.6003 - val_accuracy: 0.8048
Epoch 5/10
1719/1719 [==============================] - 2s 967us/step - loss: 0.6077 - accuracy: 0.8004 - val_loss: 0.5656 - val_accuracy: 0.8182
Epoch 6/10
1719/1719 [==============================] - 2s 984us/step - loss: 0.5692 - accuracy: 0.8118 - val_loss: 0.5406 - val_accuracy: 0.8236
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5428 - accuracy: 0.8194 - val_loss: 0.5196 - val_accuracy: 0.8314
Epoch 8/10
1719/1719 [==============================] - 2s 983us/step - loss: 0.5193 - accuracy: 0.8284 - val_loss: 0.5113 - val_accuracy: 0.8316
Epoch 9/10
1719/1719 [==============================] - 2s 992us/step - loss: 0.5128 - accuracy: 0.8272 - val_loss: 0.4916 - val_accuracy: 0.8378
Epoch 10/10
1719/1719 [==============================] - 2s 988us/step - loss: 0.4941 - accuracy: 0.8314 - val_loss: 0.4826 - val_accuracy: 0.8398
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 12s 6ms/step - loss: 1.3556 - accuracy: 0.4808 - val_loss: 0.7711 - val_accuracy: 0.6858
Epoch 2/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7537 - accuracy: 0.7235 - val_loss: 0.7534 - val_accuracy: 0.7384
Epoch 3/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7451 - accuracy: 0.7357 - val_loss: 0.5943 - val_accuracy: 0.7834
Epoch 4/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5699 - accuracy: 0.7906 - val_loss: 0.5434 - val_accuracy: 0.8066
Epoch 5/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5569 - accuracy: 0.8051 - val_loss: 0.4907 - val_accuracy: 0.8218
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 11s 5ms/step - loss: 2.0460 - accuracy: 0.1919 - val_loss: 1.5971 - val_accuracy: 0.3048
Epoch 2/5
1719/1719 [==============================] - 8s 5ms/step - loss: 1.2654 - accuracy: 0.4591 - val_loss: 0.9156 - val_accuracy: 0.6372
Epoch 3/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.9312 - accuracy: 0.6169 - val_loss: 0.8928 - val_accuracy: 0.6246
Epoch 4/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.8188 - accuracy: 0.6710 - val_loss: 0.6914 - val_accuracy: 0.7396
Epoch 5/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.7288 - accuracy: 0.7152 - val_loss: 0.6638 - val_accuracy: 0.7380
###Markdown
Not great at all, we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
#bn1.updates #deprecated
model.compile(loss="sparse_categorical_crossentropy",
              optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.2287 - accuracy: 0.5993 - val_loss: 0.5526 - val_accuracy: 0.8230
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5996 - accuracy: 0.7959 - val_loss: 0.4725 - val_accuracy: 0.8468
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5312 - accuracy: 0.8168 - val_loss: 0.4375 - val_accuracy: 0.8558
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4884 - accuracy: 0.8294 - val_loss: 0.4153 - val_accuracy: 0.8596
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4717 - accuracy: 0.8343 - val_loss: 0.3997 - val_accuracy: 0.8640
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4420 - accuracy: 0.8461 - val_loss: 0.3867 - val_accuracy: 0.8694
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4285 - accuracy: 0.8496 - val_loss: 0.3763 - val_accuracy: 0.8710
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4086 - accuracy: 0.8552 - val_loss: 0.3711 - val_accuracy: 0.8740
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4079 - accuracy: 0.8566 - val_loss: 0.3631 - val_accuracy: 0.8752
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3903 - accuracy: 0.8617 - val_loss: 0.3573 - val_accuracy: 0.8750
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has its own offset (bias) parameters; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.3677 - accuracy: 0.5604 - val_loss: 0.6767 - val_accuracy: 0.7812
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.7136 - accuracy: 0.7702 - val_loss: 0.5566 - val_accuracy: 0.8184
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.6123 - accuracy: 0.7990 - val_loss: 0.5007 - val_accuracy: 0.8360
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5547 - accuracy: 0.8148 - val_loss: 0.4666 - val_accuracy: 0.8448
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5255 - accuracy: 0.8230 - val_loss: 0.4434 - val_accuracy: 0.8534
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4947 - accuracy: 0.8328 - val_loss: 0.4263 - val_accuracy: 0.8550
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4736 - accuracy: 0.8385 - val_loss: 0.4130 - val_accuracy: 0.8566
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4550 - accuracy: 0.8446 - val_loss: 0.4035 - val_accuracy: 0.8612
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4495 - accuracy: 0.8440 - val_loss: 0.3943 - val_accuracy: 0.8638
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4333 - accuracy: 0.8494 - val_loss: 0.3875 - val_accuracy: 0.8660
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
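# clipvalue clips each gradient component to [-1.0, 1.0] (which can change the gradient's direction);
# clipnorm instead rescales the whole gradient whenever its ℓ2 norm exceeds 1.0 (direction is preserved)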
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
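# freeze the reused layers for the first few epochs so the new output layer can learn reasonable weights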
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
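# then unfreeze the reused layers and continue training to fine-tune them for task B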
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 1s 83ms/step - loss: 0.6155 - accuracy: 0.6184 - val_loss: 0.5843 - val_accuracy: 0.6329
Epoch 2/4
7/7 [==============================] - 0s 9ms/step - loss: 0.5550 - accuracy: 0.6638 - val_loss: 0.5467 - val_accuracy: 0.6805
Epoch 3/4
7/7 [==============================] - 0s 8ms/step - loss: 0.4897 - accuracy: 0.7482 - val_loss: 0.5146 - val_accuracy: 0.7089
Epoch 4/4
7/7 [==============================] - 0s 8ms/step - loss: 0.4899 - accuracy: 0.7405 - val_loss: 0.4859 - val_accuracy: 0.7323
Epoch 1/16
7/7 [==============================] - 0s 28ms/step - loss: 0.4380 - accuracy: 0.7774 - val_loss: 0.3460 - val_accuracy: 0.8661
Epoch 2/16
7/7 [==============================] - 0s 9ms/step - loss: 0.2971 - accuracy: 0.9143 - val_loss: 0.2603 - val_accuracy: 0.9310
Epoch 3/16
7/7 [==============================] - 0s 9ms/step - loss: 0.2034 - accuracy: 0.9777 - val_loss: 0.2110 - val_accuracy: 0.9554
Epoch 4/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1754 - accuracy: 0.9719 - val_loss: 0.1790 - val_accuracy: 0.9696
Epoch 5/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1348 - accuracy: 0.9809 - val_loss: 0.1561 - val_accuracy: 0.9757
Epoch 6/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1172 - accuracy: 0.9973 - val_loss: 0.1392 - val_accuracy: 0.9797
Epoch 7/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1137 - accuracy: 0.9931 - val_loss: 0.1266 - val_accuracy: 0.9838
Epoch 8/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1000 - accuracy: 0.9931 - val_loss: 0.1163 - val_accuracy: 0.9858
Epoch 9/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0834 - accuracy: 1.0000 - val_loss: 0.1065 - val_accuracy: 0.9888
Epoch 10/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0775 - accuracy: 1.0000 - val_loss: 0.0999 - val_accuracy: 0.9899
Epoch 11/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0689 - accuracy: 1.0000 - val_loss: 0.0939 - val_accuracy: 0.9899
Epoch 12/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0888 - val_accuracy: 0.9899
Epoch 13/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0565 - accuracy: 1.0000 - val_loss: 0.0839 - val_accuracy: 0.9899
Epoch 14/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0494 - accuracy: 1.0000 - val_loss: 0.0802 - val_accuracy: 0.9899
Epoch 15/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0544 - accuracy: 1.0000 - val_loss: 0.0768 - val_accuracy: 0.9899
Epoch 16/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0472 - accuracy: 1.0000 - val_loss: 0.0738 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 705us/step - loss: 0.0682 - accuracy: 0.9935
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4.5!
###Code
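# ratio of the error rates: model_B's test error divided by model_B_on_A's test error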
(100 - 97.05) / (100 - 99.35)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(learning_rate=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
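# decay=1e-4 corresponds to power scheduling with c=1 and s = 1/decay = 10,000 steps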
optimizer = keras.optimizers.SGD(learning_rate=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
import math
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = math.ceil(len(X_train) / batch_size)
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
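# equivalent schedule expressed in terms of the current learning rate: multiplying lr by
# 0.1**(1/20) at every epoch yields lr0 * 0.1**(epoch / 20), the same schedule as before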
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.learning_rate)
        K.set_value(self.model.optimizer.learning_rate, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.learning_rate)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(learning_rate=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
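        # np.argmax returns the index of the first boundary greater than the current epoch;
        # subtracting 1 selects the matching value (and wraps to the last value once all boundaries are passed)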
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5995 - accuracy: 0.7923 - val_loss: 0.4095 - val_accuracy: 0.8606
Epoch 2/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3890 - accuracy: 0.8613 - val_loss: 0.3738 - val_accuracy: 0.8692
Epoch 3/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3530 - accuracy: 0.8772 - val_loss: 0.3735 - val_accuracy: 0.8692
Epoch 4/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3296 - accuracy: 0.8813 - val_loss: 0.3494 - val_accuracy: 0.8798
Epoch 5/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3178 - accuracy: 0.8867 - val_loss: 0.3430 - val_accuracy: 0.8794
Epoch 6/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2930 - accuracy: 0.8951 - val_loss: 0.3414 - val_accuracy: 0.8826
Epoch 7/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2854 - accuracy: 0.8985 - val_loss: 0.3354 - val_accuracy: 0.8810
Epoch 8/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9039 - val_loss: 0.3364 - val_accuracy: 0.8824
Epoch 9/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9047 - val_loss: 0.3265 - val_accuracy: 0.8846
Epoch 10/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2570 - accuracy: 0.9084 - val_loss: 0.3238 - val_accuracy: 0.8854
Epoch 11/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2502 - accuracy: 0.9117 - val_loss: 0.3250 - val_accuracy: 0.8862
Epoch 12/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2453 - accuracy: 0.9145 - val_loss: 0.3299 - val_accuracy: 0.8830
Epoch 13/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2408 - accuracy: 0.9154 - val_loss: 0.3219 - val_accuracy: 0.8870
Epoch 14/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2380 - accuracy: 0.9154 - val_loss: 0.3221 - val_accuracy: 0.8860
Epoch 15/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2378 - accuracy: 0.9166 - val_loss: 0.3208 - val_accuracy: 0.8864
Epoch 16/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2318 - accuracy: 0.9191 - val_loss: 0.3184 - val_accuracy: 0.8892
Epoch 17/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2266 - accuracy: 0.9212 - val_loss: 0.3197 - val_accuracy: 0.8906
Epoch 18/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2284 - accuracy: 0.9185 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 19/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2286 - accuracy: 0.9205 - val_loss: 0.3197 - val_accuracy: 0.8884
Epoch 20/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2288 - accuracy: 0.9211 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 21/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2265 - accuracy: 0.9212 - val_loss: 0.3179 - val_accuracy: 0.8904
Epoch 22/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2258 - accuracy: 0.9205 - val_loss: 0.3163 - val_accuracy: 0.8914
Epoch 23/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9226 - val_loss: 0.3170 - val_accuracy: 0.8904
Epoch 24/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2182 - accuracy: 0.9244 - val_loss: 0.3165 - val_accuracy: 0.8898
Epoch 25/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9229 - val_loss: 0.3164 - val_accuracy: 0.8904
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
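###Markdown
 As with the `ExponentialDecay` schedule earlier, a schedule object like this is passed straight to an optimizer rather than used through a callback. A minimal usage sketch (reusing the `model` and the `learning_rate` schedule defined just above):
###Code
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
###Output
_____no_output_____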
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.learning_rate))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.learning_rate, self.model.optimizer.learning_rate * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
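    # grow the learning rate exponentially from min_rate to max_rate over the given number of epochs,
    # recording the loss after each batch, then restore the model's initial weights and learning rate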
init_weights = model.get_weights()
iterations = math.ceil(len(X) / batch_size) * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.learning_rate)
K.set_value(model.optimizer.learning_rate, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.learning_rate, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
###Output
_____no_output_____
###Markdown
**Warning**: In the `on_batch_end()` method, `logs["loss"]` used to contain the batch loss, but in TensorFlow 2.2.0 it was replaced with the mean loss (since the start of the epoch). This explains why the graph below is much smoother than in the book (if you are using TF 2.2 or above). It also means that there is a lag between the moment the batch loss starts exploding and the moment the explosion becomes clear in the graph. So you should choose a slightly smaller learning rate than you would have chosen with the "noisy" graph. Alternatively, you can tweak the `ExponentialLearningRate` callback above so it computes the batch loss (based on the current mean loss and the previous mean loss):
```python
class ExponentialLearningRate(keras.callbacks.Callback):
    def __init__(self, factor):
        self.factor = factor
        self.rates = []
        self.losses = []
    def on_epoch_begin(self, epoch, logs=None):
        self.prev_loss = 0
    def on_batch_end(self, batch, logs=None):
        batch_loss = logs["loss"] * (batch + 1) - self.prev_loss * batch
        self.prev_loss = logs["loss"]
        self.rates.append(K.get_value(self.model.optimizer.learning_rate))
        self.losses.append(batch_loss)
        K.set_value(self.model.optimizer.learning_rate,
                    self.model.optimizer.learning_rate * self.factor)
```
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
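        # three phases: ramp the rate up linearly from start_rate to max_rate, ramp it back down
        # to start_rate, then drop it linearly to last_rate over the final iterations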
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.learning_rate, rate)
n_epochs = 25
onecycle = OneCycleScheduler(math.ceil(len(X_train) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 1s 2ms/step - loss: 0.6572 - accuracy: 0.7740 - val_loss: 0.4872 - val_accuracy: 0.8338
Epoch 2/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4580 - accuracy: 0.8397 - val_loss: 0.4274 - val_accuracy: 0.8520
Epoch 3/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4121 - accuracy: 0.8545 - val_loss: 0.4116 - val_accuracy: 0.8588
Epoch 4/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3837 - accuracy: 0.8642 - val_loss: 0.3868 - val_accuracy: 0.8688
Epoch 5/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3639 - accuracy: 0.8719 - val_loss: 0.3766 - val_accuracy: 0.8688
Epoch 6/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3456 - accuracy: 0.8775 - val_loss: 0.3739 - val_accuracy: 0.8706
Epoch 7/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3330 - accuracy: 0.8811 - val_loss: 0.3635 - val_accuracy: 0.8708
Epoch 8/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3184 - accuracy: 0.8861 - val_loss: 0.3959 - val_accuracy: 0.8610
Epoch 9/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3065 - accuracy: 0.8890 - val_loss: 0.3475 - val_accuracy: 0.8770
Epoch 10/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2943 - accuracy: 0.8927 - val_loss: 0.3392 - val_accuracy: 0.8806
Epoch 11/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2838 - accuracy: 0.8963 - val_loss: 0.3467 - val_accuracy: 0.8800
Epoch 12/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2707 - accuracy: 0.9024 - val_loss: 0.3646 - val_accuracy: 0.8696
Epoch 13/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2536 - accuracy: 0.9079 - val_loss: 0.3350 - val_accuracy: 0.8842
Epoch 14/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2405 - accuracy: 0.9135 - val_loss: 0.3465 - val_accuracy: 0.8794
Epoch 15/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2279 - accuracy: 0.9185 - val_loss: 0.3257 - val_accuracy: 0.8830
Epoch 16/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2159 - accuracy: 0.9232 - val_loss: 0.3294 - val_accuracy: 0.8824
Epoch 17/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2062 - accuracy: 0.9263 - val_loss: 0.3333 - val_accuracy: 0.8882
Epoch 18/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1978 - accuracy: 0.9301 - val_loss: 0.3235 - val_accuracy: 0.8898
Epoch 19/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1892 - accuracy: 0.9337 - val_loss: 0.3233 - val_accuracy: 0.8906
Epoch 20/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1821 - accuracy: 0.9365 - val_loss: 0.3224 - val_accuracy: 0.8928
Epoch 21/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1752 - accuracy: 0.9400 - val_loss: 0.3220 - val_accuracy: 0.8908
Epoch 22/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1700 - accuracy: 0.9416 - val_loss: 0.3180 - val_accuracy: 0.8962
Epoch 23/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1655 - accuracy: 0.9438 - val_loss: 0.3187 - val_accuracy: 0.8940
Epoch 24/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1627 - accuracy: 0.9454 - val_loss: 0.3177 - val_accuracy: 0.8932
Epoch 25/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1610 - accuracy: 0.9462 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 3.2911 - accuracy: 0.7924 - val_loss: 0.7218 - val_accuracy: 0.8310
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.7282 - accuracy: 0.8245 - val_loss: 0.6826 - val_accuracy: 0.8382
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 0.7611 - accuracy: 0.7576 - val_loss: 0.3730 - val_accuracy: 0.8644
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4306 - accuracy: 0.8401 - val_loss: 0.3395 - val_accuracy: 0.8722
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4225 - accuracy: 0.8432
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
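# calling the model with training=True keeps the dropout layers active at inference time,
# so each of the 100 forward passes below is a different Monte Carlo sample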
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5763 - accuracy: 0.8020 - val_loss: 0.3674 - val_accuracy: 0.8674
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3545 - accuracy: 0.8709 - val_loss: 0.3714 - val_accuracy: 0.8662
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(learning_rate=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4960 - accuracy: 0.4762
###Markdown
The model with the lowest validation loss gets about 47.6% accuracy on the validation set. It took 27 epochs to reach the lowest validation loss, with roughly 8 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:
* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.
* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.
* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 19s 9ms/step - loss: 1.9765 - accuracy: 0.2968 - val_loss: 1.6602 - val_accuracy: 0.4042
Epoch 2/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6787 - accuracy: 0.4056 - val_loss: 1.5887 - val_accuracy: 0.4304
Epoch 3/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6097 - accuracy: 0.4274 - val_loss: 1.5781 - val_accuracy: 0.4326
Epoch 4/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5574 - accuracy: 0.4486 - val_loss: 1.5064 - val_accuracy: 0.4676
Epoch 5/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5075 - accuracy: 0.4642 - val_loss: 1.4412 - val_accuracy: 0.4844
Epoch 6/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4664 - accuracy: 0.4787 - val_loss: 1.4179 - val_accuracy: 0.4984
Epoch 7/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4334 - accuracy: 0.4932 - val_loss: 1.4277 - val_accuracy: 0.4906
Epoch 8/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.4054 - accuracy: 0.5038 - val_loss: 1.3843 - val_accuracy: 0.5130
Epoch 9/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3816 - accuracy: 0.5106 - val_loss: 1.3691 - val_accuracy: 0.5108
Epoch 10/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3547 - accuracy: 0.5206 - val_loss: 1.3552 - val_accuracy: 0.5226
Epoch 11/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.3244 - accuracy: 0.5371 - val_loss: 1.3678 - val_accuracy: 0.5142
Epoch 12/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3078 - accuracy: 0.5393 - val_loss: 1.3844 - val_accuracy: 0.5080
Epoch 13/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2889 - accuracy: 0.5431 - val_loss: 1.3566 - val_accuracy: 0.5164
Epoch 14/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2607 - accuracy: 0.5559 - val_loss: 1.3626 - val_accuracy: 0.5248
Epoch 15/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2580 - accuracy: 0.5587 - val_loss: 1.3616 - val_accuracy: 0.5276
Epoch 16/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2441 - accuracy: 0.5586 - val_loss: 1.3350 - val_accuracy: 0.5286
Epoch 17/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2241 - accuracy: 0.5676 - val_loss: 1.3370 - val_accuracy: 0.5408
Epoch 18/100
<<29 more lines>>
Epoch 33/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0336 - accuracy: 0.6369 - val_loss: 1.3682 - val_accuracy: 0.5450
Epoch 34/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.0228 - accuracy: 0.6388 - val_loss: 1.3348 - val_accuracy: 0.5458
Epoch 35/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0205 - accuracy: 0.6407 - val_loss: 1.3490 - val_accuracy: 0.5440
Epoch 36/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.0008 - accuracy: 0.6489 - val_loss: 1.3568 - val_accuracy: 0.5408
Epoch 37/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9785 - accuracy: 0.6543 - val_loss: 1.3628 - val_accuracy: 0.5396
Epoch 38/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9832 - accuracy: 0.6592 - val_loss: 1.3617 - val_accuracy: 0.5482
Epoch 39/100
1407/1407 [==============================] - 12s 8ms/step - loss: 0.9707 - accuracy: 0.6581 - val_loss: 1.3767 - val_accuracy: 0.5446
Epoch 40/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9590 - accuracy: 0.6651 - val_loss: 1.4200 - val_accuracy: 0.5314
Epoch 41/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9548 - accuracy: 0.6668 - val_loss: 1.3692 - val_accuracy: 0.5450
Epoch 42/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9480 - accuracy: 0.6667 - val_loss: 1.3841 - val_accuracy: 0.5310
Epoch 43/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9411 - accuracy: 0.6716 - val_loss: 1.4036 - val_accuracy: 0.5382
Epoch 44/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9383 - accuracy: 0.6708 - val_loss: 1.4114 - val_accuracy: 0.5236
Epoch 45/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9258 - accuracy: 0.6769 - val_loss: 1.4224 - val_accuracy: 0.5324
Epoch 46/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9072 - accuracy: 0.6836 - val_loss: 1.3875 - val_accuracy: 0.5442
Epoch 47/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8996 - accuracy: 0.6850 - val_loss: 1.4449 - val_accuracy: 0.5280
Epoch 48/100
1407/1407 [==============================] - 13s 9ms/step - loss: 0.9050 - accuracy: 0.6835 - val_loss: 1.4167 - val_accuracy: 0.5338
Epoch 49/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8934 - accuracy: 0.6880 - val_loss: 1.4260 - val_accuracy: 0.5294
157/157 [==============================] - 1s 2ms/step - loss: 1.3344 - accuracy: 0.5398
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 27 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 5 epochs and continued to make progress until the 16th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.
* *Does BN produce a better model?* Yes! The final model is also much better, with 54.0% accuracy instead of 47.6%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).
* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 12s instead of 8s, because of the extra computations required by the BN layers. But overall the training time (wall time) was shortened significantly!
d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4633 - accuracy: 0.4792
###Markdown
We get 47.9% accuracy, which is not much better than the original model (47.6%), and not as good as the model using batch normalization (54.0%). However, convergence was almost as fast as with the BN model, plus each epoch took only 7 seconds. So it's by far the fastest model to train so far. e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 9s 5ms/step - loss: 2.0583 - accuracy: 0.2742 - val_loss: 1.7429 - val_accuracy: 0.3858
Epoch 2/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.6852 - accuracy: 0.4008 - val_loss: 1.7055 - val_accuracy: 0.3792
Epoch 3/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5963 - accuracy: 0.4413 - val_loss: 1.7401 - val_accuracy: 0.4072
Epoch 4/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5231 - accuracy: 0.4634 - val_loss: 1.5728 - val_accuracy: 0.4584
Epoch 5/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.4619 - accuracy: 0.4887 - val_loss: 1.5448 - val_accuracy: 0.4702
Epoch 6/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.4074 - accuracy: 0.5061 - val_loss: 1.5678 - val_accuracy: 0.4664
Epoch 7/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.3718 - accuracy: 0.5222 - val_loss: 1.5764 - val_accuracy: 0.4824
Epoch 8/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.3220 - accuracy: 0.5387 - val_loss: 1.4805 - val_accuracy: 0.4890
Epoch 9/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2908 - accuracy: 0.5487 - val_loss: 1.5521 - val_accuracy: 0.4638
Epoch 10/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.2537 - accuracy: 0.5607 - val_loss: 1.5281 - val_accuracy: 0.4924
Epoch 11/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2215 - accuracy: 0.5782 - val_loss: 1.5147 - val_accuracy: 0.5046
Epoch 12/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.1910 - accuracy: 0.5831 - val_loss: 1.5248 - val_accuracy: 0.5002
Epoch 13/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1659 - accuracy: 0.5982 - val_loss: 1.5620 - val_accuracy: 0.5066
Epoch 14/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1282 - accuracy: 0.6120 - val_loss: 1.5440 - val_accuracy: 0.5180
Epoch 15/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1127 - accuracy: 0.6133 - val_loss: 1.5782 - val_accuracy: 0.5146
Epoch 16/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0917 - accuracy: 0.6266 - val_loss: 1.6182 - val_accuracy: 0.5182
Epoch 17/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.0620 - accuracy: 0.6331 - val_loss: 1.6285 - val_accuracy: 0.5126
Epoch 18/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0433 - accuracy: 0.6413 - val_loss: 1.6299 - val_accuracy: 0.5158
Epoch 19/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0087 - accuracy: 0.6549 - val_loss: 1.7172 - val_accuracy: 0.5062
Epoch 20/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9950 - accuracy: 0.6571 - val_loss: 1.6524 - val_accuracy: 0.5098
Epoch 21/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9848 - accuracy: 0.6652 - val_loss: 1.7686 - val_accuracy: 0.5038
Epoch 22/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9597 - accuracy: 0.6744 - val_loss: 1.6177 - val_accuracy: 0.5084
Epoch 23/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9399 - accuracy: 0.6790 - val_loss: 1.7095 - val_accuracy: 0.5082
Epoch 24/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9148 - accuracy: 0.6884 - val_loss: 1.7160 - val_accuracy: 0.5150
Epoch 25/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9023 - accuracy: 0.6949 - val_loss: 1.7017 - val_accuracy: 0.5152
Epoch 26/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8732 - accuracy: 0.7031 - val_loss: 1.7274 - val_accuracy: 0.5088
Epoch 27/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.8542 - accuracy: 0.7091 - val_loss: 1.7648 - val_accuracy: 0.5166
Epoch 28/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8499 - accuracy: 0.7118 - val_loss: 1.7973 - val_accuracy: 0.5000
157/157 [==============================] - 0s 1ms/step - loss: 1.4805 - accuracy: 0.4890
###Markdown
The model reaches 48.9% accuracy on the validation set. That's very slightly better than without dropout (47.6%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get no accuracy improvement in this case (we're still at 48.9% accuracy). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
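The cell below relies on the `find_learning_rate`, `plot_lr_vs_loss` and `OneCycleScheduler` helpers, which are assumed to be defined elsewhere in this notebook and are not repeated here. As a reminder of the idea only, here is a minimal sketch of a 1cycle-style callback (the class name, defaults and details are illustrative, not the exact implementation used below):
```
from tensorflow import keras

class OneCycleSketch(keras.callbacks.Callback):
    # Illustrative 1cycle schedule: ramp the learning rate linearly up to max_rate for
    # roughly the first half of training, back down for the second half, then decay
    # further during the last few iterations.
    def __init__(self, iterations, max_rate, start_rate=None,
                 last_iterations=None, last_rate=None):
        self.iterations = iterations
        self.max_rate = max_rate
        self.start_rate = start_rate or max_rate / 10
        self.last_iterations = last_iterations or iterations // 10 + 1
        self.half_iteration = (iterations - self.last_iterations) // 2
        self.last_rate = last_rate or self.start_rate / 1000
        self.iteration = 0
    def _interpolate(self, iter1, iter2, rate1, rate2):
        return (rate2 - rate1) * (self.iteration - iter1) / (iter2 - iter1) + rate1
    def on_batch_begin(self, batch, logs=None):
        if self.iteration < self.half_iteration:            # ramp up
            rate = self._interpolate(0, self.half_iteration,
                                     self.start_rate, self.max_rate)
        elif self.iteration < 2 * self.half_iteration:       # ramp back down
            rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
                                     self.max_rate, self.start_rate)
        else:                                                 # final decay
            rate = self._interpolate(2 * self.half_iteration, self.iterations,
                                     self.start_rate, self.last_rate)
        self.iteration += 1
        keras.backend.set_value(self.model.optimizer.lr, rate)
```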
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(learning_rate=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(math.ceil(len(X_train_scaled) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/15
352/352 [==============================] - 3s 6ms/step - loss: 2.2298 - accuracy: 0.2349 - val_loss: 1.7841 - val_accuracy: 0.3834
Epoch 2/15
352/352 [==============================] - 2s 6ms/step - loss: 1.7928 - accuracy: 0.3689 - val_loss: 1.6806 - val_accuracy: 0.4086
Epoch 3/15
352/352 [==============================] - 2s 6ms/step - loss: 1.6475 - accuracy: 0.4190 - val_loss: 1.6378 - val_accuracy: 0.4350
Epoch 4/15
352/352 [==============================] - 2s 6ms/step - loss: 1.5428 - accuracy: 0.4543 - val_loss: 1.6266 - val_accuracy: 0.4390
Epoch 5/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4865 - accuracy: 0.4769 - val_loss: 1.6158 - val_accuracy: 0.4384
Epoch 6/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4339 - accuracy: 0.4866 - val_loss: 1.5850 - val_accuracy: 0.4412
Epoch 7/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4042 - accuracy: 0.5056 - val_loss: 1.6146 - val_accuracy: 0.4384
Epoch 8/15
352/352 [==============================] - 2s 6ms/step - loss: 1.3437 - accuracy: 0.5229 - val_loss: 1.5299 - val_accuracy: 0.4846
Epoch 9/15
352/352 [==============================] - 2s 5ms/step - loss: 1.2721 - accuracy: 0.5459 - val_loss: 1.5145 - val_accuracy: 0.4874
Epoch 10/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1942 - accuracy: 0.5698 - val_loss: 1.4958 - val_accuracy: 0.5040
Epoch 11/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1211 - accuracy: 0.6033 - val_loss: 1.5406 - val_accuracy: 0.4984
Epoch 12/15
352/352 [==============================] - 2s 6ms/step - loss: 1.0673 - accuracy: 0.6161 - val_loss: 1.5284 - val_accuracy: 0.5144
Epoch 13/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9927 - accuracy: 0.6435 - val_loss: 1.5449 - val_accuracy: 0.5140
Epoch 14/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9205 - accuracy: 0.6703 - val_loss: 1.5652 - val_accuracy: 0.5224
Epoch 15/15
352/352 [==============================] - 2s 6ms/step - loss: 0.8936 - accuracy: 0.6801 - val_loss: 1.5912 - val_accuracy: 0.5198
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 2s 41us/sample - loss: 1.2810 - accuracy: 0.6205 - val_loss: 0.8869 - val_accuracy: 0.7160
Epoch 2/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.7952 - accuracy: 0.7369 - val_loss: 0.7132 - val_accuracy: 0.7626
Epoch 3/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6817 - accuracy: 0.7726 - val_loss: 0.6385 - val_accuracy: 0.7894
Epoch 4/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6219 - accuracy: 0.7942 - val_loss: 0.5931 - val_accuracy: 0.8016
Epoch 5/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5830 - accuracy: 0.8074 - val_loss: 0.5607 - val_accuracy: 0.8170
Epoch 6/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5552 - accuracy: 0.8172 - val_loss: 0.5355 - val_accuracy: 0.8238
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5339 - accuracy: 0.8226 - val_loss: 0.5166 - val_accuracy: 0.8298
Epoch 8/10
55000/55000 [==============================] - 2s 43us/sample - loss: 0.5173 - accuracy: 0.8262 - val_loss: 0.5043 - val_accuracy: 0.8356
Epoch 9/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5039 - accuracy: 0.8306 - val_loss: 0.4889 - val_accuracy: 0.8384
Epoch 10/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.4923 - accuracy: 0.8333 - val_loss: 0.4816 - val_accuracy: 0.8394
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 47us/sample - loss: 1.3452 - accuracy: 0.6203 - val_loss: 0.9241 - val_accuracy: 0.7170
Epoch 2/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.8196 - accuracy: 0.7364 - val_loss: 0.7314 - val_accuracy: 0.7600
Epoch 3/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.6970 - accuracy: 0.7701 - val_loss: 0.6517 - val_accuracy: 0.7880
Epoch 4/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.6333 - accuracy: 0.7914 - val_loss: 0.6032 - val_accuracy: 0.8050
Epoch 5/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5916 - accuracy: 0.8049 - val_loss: 0.5689 - val_accuracy: 0.8162
Epoch 6/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5619 - accuracy: 0.8143 - val_loss: 0.5416 - val_accuracy: 0.8222
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5391 - accuracy: 0.8208 - val_loss: 0.5213 - val_accuracy: 0.8300
Epoch 8/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5214 - accuracy: 0.8258 - val_loss: 0.5075 - val_accuracy: 0.8348
Epoch 9/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5070 - accuracy: 0.8287 - val_loss: 0.4917 - val_accuracy: 0.8380
Epoch 10/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.4946 - accuracy: 0.8322 - val_loss: 0.4839 - val_accuracy: 0.8378
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 13s 238us/sample - loss: 1.1277 - accuracy: 0.5573 - val_loss: 0.8152 - val_accuracy: 0.6700
Epoch 2/5
55000/55000 [==============================] - 11s 198us/sample - loss: 0.6935 - accuracy: 0.7383 - val_loss: 0.5806 - val_accuracy: 0.7928
Epoch 3/5
55000/55000 [==============================] - 11s 196us/sample - loss: 0.5871 - accuracy: 0.7865 - val_loss: 0.6876 - val_accuracy: 0.7462
Epoch 4/5
55000/55000 [==============================] - 11s 199us/sample - loss: 0.5281 - accuracy: 0.8134 - val_loss: 0.5236 - val_accuracy: 0.8230
Epoch 5/5
55000/55000 [==============================] - 11s 201us/sample - loss: 0.4824 - accuracy: 0.8327 - val_loss: 0.5201 - val_accuracy: 0.8312
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 12s 213us/sample - loss: 1.7518 - accuracy: 0.2797 - val_loss: 1.2328 - val_accuracy: 0.4720
Epoch 2/5
55000/55000 [==============================] - 10s 177us/sample - loss: 1.1922 - accuracy: 0.4982 - val_loss: 1.0247 - val_accuracy: 0.5354
Epoch 3/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.9390 - accuracy: 0.6180 - val_loss: 1.0809 - val_accuracy: 0.5118
Epoch 4/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.7787 - accuracy: 0.6937 - val_loss: 0.7067 - val_accuracy: 0.7344
Epoch 5/5
55000/55000 [==============================] - 10s 180us/sample - loss: 0.7465 - accuracy: 0.7122 - val_loss: 0.9720 - val_accuracy: 0.5702
###Markdown
Not great at all, we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
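# The scale (gamma) and offset (beta) variables are trainable; the moving mean and
# moving variance are updated during training but not trained by backpropagation.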
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 63us/sample - loss: 0.8760 - accuracy: 0.7122 - val_loss: 0.5509 - val_accuracy: 0.8224
Epoch 2/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5737 - accuracy: 0.8039 - val_loss: 0.4723 - val_accuracy: 0.8460
Epoch 3/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5143 - accuracy: 0.8231 - val_loss: 0.4376 - val_accuracy: 0.8570
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4826 - accuracy: 0.8333 - val_loss: 0.4135 - val_accuracy: 0.8638
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4571 - accuracy: 0.8415 - val_loss: 0.3990 - val_accuracy: 0.8654
Epoch 6/10
55000/55000 [==============================] - 3s 53us/sample - loss: 0.4432 - accuracy: 0.8456 - val_loss: 0.3870 - val_accuracy: 0.8710
Epoch 7/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.4255 - accuracy: 0.8515 - val_loss: 0.3782 - val_accuracy: 0.8698
Epoch 8/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4150 - accuracy: 0.8536 - val_loss: 0.3708 - val_accuracy: 0.8758
Epoch 9/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4016 - accuracy: 0.8596 - val_loss: 0.3634 - val_accuracy: 0.8750
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3915 - accuracy: 0.8629 - val_loss: 0.3601 - val_accuracy: 0.8758
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer adds its own offset parameter per input; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 4s 64us/sample - loss: 0.8656 - accuracy: 0.7094 - val_loss: 0.5650 - val_accuracy: 0.8098
Epoch 2/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5766 - accuracy: 0.8018 - val_loss: 0.4834 - val_accuracy: 0.8358
Epoch 3/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5184 - accuracy: 0.8216 - val_loss: 0.4461 - val_accuracy: 0.8470
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4852 - accuracy: 0.8314 - val_loss: 0.4226 - val_accuracy: 0.8558
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4579 - accuracy: 0.8399 - val_loss: 0.4086 - val_accuracy: 0.8604
Epoch 6/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4406 - accuracy: 0.8457 - val_loss: 0.3974 - val_accuracy: 0.8640
Epoch 7/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4263 - accuracy: 0.8498 - val_loss: 0.3883 - val_accuracy: 0.8676
Epoch 8/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4152 - accuracy: 0.8530 - val_loss: 0.3803 - val_accuracy: 0.8682
Epoch 9/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4032 - accuracy: 0.8564 - val_loss: 0.3738 - val_accuracy: 0.8718
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3937 - accuracy: 0.8623 - val_loss: 0.3690 - val_accuracy: 0.8732
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
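# Either of these clipped optimizers is then passed to compile() like any other, e.g. (sketch):
# model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])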
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Train on 200 samples, validate on 986 samples
Epoch 1/4
200/200 [==============================] - 0s 2ms/sample - loss: 0.5619 - accuracy: 0.6650 - val_loss: 0.5669 - val_accuracy: 0.6531
Epoch 2/4
200/200 [==============================] - 0s 208us/sample - loss: 0.5249 - accuracy: 0.7200 - val_loss: 0.5337 - val_accuracy: 0.6957
Epoch 3/4
200/200 [==============================] - 0s 200us/sample - loss: 0.4923 - accuracy: 0.7400 - val_loss: 0.5039 - val_accuracy: 0.7211
Epoch 4/4
200/200 [==============================] - 0s 214us/sample - loss: 0.4630 - accuracy: 0.7550 - val_loss: 0.4773 - val_accuracy: 0.7383
Train on 200 samples, validate on 986 samples
Epoch 1/16
200/200 [==============================] - 0s 2ms/sample - loss: 0.3864 - accuracy: 0.8200 - val_loss: 0.3357 - val_accuracy: 0.8661
Epoch 2/16
200/200 [==============================] - 0s 207us/sample - loss: 0.2701 - accuracy: 0.9350 - val_loss: 0.2608 - val_accuracy: 0.9249
Epoch 3/16
200/200 [==============================] - 0s 226us/sample - loss: 0.2082 - accuracy: 0.9650 - val_loss: 0.2150 - val_accuracy: 0.9503
Epoch 4/16
200/200 [==============================] - 0s 212us/sample - loss: 0.1695 - accuracy: 0.9800 - val_loss: 0.1840 - val_accuracy: 0.9625
Epoch 5/16
200/200 [==============================] - 0s 226us/sample - loss: 0.1428 - accuracy: 0.9800 - val_loss: 0.1602 - val_accuracy: 0.9706
Epoch 6/16
200/200 [==============================] - 0s 236us/sample - loss: 0.1221 - accuracy: 0.9850 - val_loss: 0.1424 - val_accuracy: 0.9797
Epoch 7/16
200/200 [==============================] - 0s 218us/sample - loss: 0.1067 - accuracy: 0.9950 - val_loss: 0.1293 - val_accuracy: 0.9828
Epoch 8/16
200/200 [==============================] - 0s 229us/sample - loss: 0.0952 - accuracy: 0.9950 - val_loss: 0.1186 - val_accuracy: 0.9848
Epoch 9/16
200/200 [==============================] - 0s 224us/sample - loss: 0.0858 - accuracy: 0.9950 - val_loss: 0.1099 - val_accuracy: 0.9848
Epoch 10/16
200/200 [==============================] - 0s 241us/sample - loss: 0.0781 - accuracy: 1.0000 - val_loss: 0.1026 - val_accuracy: 0.9878
Epoch 11/16
200/200 [==============================] - 0s 234us/sample - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0964 - val_accuracy: 0.9888
Epoch 12/16
200/200 [==============================] - 0s 222us/sample - loss: 0.0664 - accuracy: 1.0000 - val_loss: 0.0906 - val_accuracy: 0.9888
Epoch 13/16
200/200 [==============================] - 0s 228us/sample - loss: 0.0614 - accuracy: 1.0000 - val_loss: 0.0862 - val_accuracy: 0.9899
Epoch 14/16
200/200 [==============================] - 0s 225us/sample - loss: 0.0575 - accuracy: 1.0000 - val_loss: 0.0818 - val_accuracy: 0.9899
Epoch 15/16
200/200 [==============================] - 0s 219us/sample - loss: 0.0537 - accuracy: 1.0000 - val_loss: 0.0782 - val_accuracy: 0.9899
Epoch 16/16
200/200 [==============================] - 0s 221us/sample - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.0752 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
2000/2000 [==============================] - 0s 25us/sample - loss: 0.0697 - accuracy: 0.9925
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4!
###Code
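# Ratio of error rates: model_B's test error (100 - 96.95 = 3.05%) divided by
# model_B_on_A's test error (100 - 99.25 = 0.75%), i.e. roughly a factor of 4.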
(100 - 96.95) / (100 - 99.25)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
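# As with the single-argument version above, this schedule function is simply wrapped in a
# keras.callbacks.LearningRateScheduler(exponential_decay_fn) callback and passed to fit().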
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
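# General factory: returns values[0] while epoch < boundaries[0], values[1] until the
# next boundary, and so on; once epoch passes the last boundary, argmax returns 0 so
# values[-1] is used.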
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
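# ReduceLROnPlateau halves the learning rate (factor=0.5) whenever the monitored
# quantity (val_loss by default) has not improved for 5 consecutive epochs.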
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
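# ExponentialDecay(initial_learning_rate=0.01, decay_steps=s, decay_rate=0.1):
# the learning rate is multiplied by 0.1 every s steps (smoothly, since staircase=False
# by default).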
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4872 - accuracy: 0.8296 - val_loss: 0.4141 - val_accuracy: 0.8548
Epoch 2/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3829 - accuracy: 0.8643 - val_loss: 0.3773 - val_accuracy: 0.8704
Epoch 3/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3495 - accuracy: 0.8763 - val_loss: 0.3696 - val_accuracy: 0.8730
Epoch 4/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3274 - accuracy: 0.8831 - val_loss: 0.3545 - val_accuracy: 0.8760
Epoch 5/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3102 - accuracy: 0.8899 - val_loss: 0.3460 - val_accuracy: 0.8784
Epoch 6/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2971 - accuracy: 0.8945 - val_loss: 0.3415 - val_accuracy: 0.8796
Epoch 7/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2858 - accuracy: 0.8985 - val_loss: 0.3353 - val_accuracy: 0.8834
Epoch 8/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2767 - accuracy: 0.9018 - val_loss: 0.3321 - val_accuracy: 0.8854
Epoch 9/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2685 - accuracy: 0.9043 - val_loss: 0.3281 - val_accuracy: 0.8862
Epoch 10/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2612 - accuracy: 0.9075 - val_loss: 0.3304 - val_accuracy: 0.8832
Epoch 11/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2554 - accuracy: 0.9097 - val_loss: 0.3261 - val_accuracy: 0.8868
Epoch 12/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2502 - accuracy: 0.9115 - val_loss: 0.3246 - val_accuracy: 0.8876
Epoch 13/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2456 - accuracy: 0.9133 - val_loss: 0.3243 - val_accuracy: 0.8870
Epoch 14/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2416 - accuracy: 0.9141 - val_loss: 0.3238 - val_accuracy: 0.8862
Epoch 15/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2380 - accuracy: 0.9170 - val_loss: 0.3197 - val_accuracy: 0.8876
Epoch 16/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2346 - accuracy: 0.9169 - val_loss: 0.3207 - val_accuracy: 0.8866
Epoch 17/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2321 - accuracy: 0.9186 - val_loss: 0.3182 - val_accuracy: 0.8878
Epoch 18/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2291 - accuracy: 0.9191 - val_loss: 0.3206 - val_accuracy: 0.8884
Epoch 19/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2271 - accuracy: 0.9201 - val_loss: 0.3194 - val_accuracy: 0.8876
Epoch 20/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2252 - accuracy: 0.9215 - val_loss: 0.3178 - val_accuracy: 0.8880
Epoch 21/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2234 - accuracy: 0.9218 - val_loss: 0.3171 - val_accuracy: 0.8904
Epoch 22/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2218 - accuracy: 0.9230 - val_loss: 0.3171 - val_accuracy: 0.8884
Epoch 23/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2204 - accuracy: 0.9227 - val_loss: 0.3168 - val_accuracy: 0.8882
Epoch 24/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2191 - accuracy: 0.9240 - val_loss: 0.3173 - val_accuracy: 0.8900
Epoch 25/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2182 - accuracy: 0.9239 - val_loss: 0.3166 - val_accuracy: 0.8892
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
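###Markdown
As with the `ExponentialDecay` schedule above, this schedule object is simply passed to the optimizer as its learning rate; a minimal sketch:
###Code
# Sketch: use the piecewise-constant schedule exactly like the ExponentialDecay schedule
# above: pass the schedule object to the optimizer as its learning rate.
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
              metrics=["accuracy"])
###Output
_____no_output_____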
###Markdown
1Cycle scheduling
###Code
K = keras.backend
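# ExponentialLearningRate multiplies the learning rate by a constant factor after each
# batch and records the (rate, loss) pairs; find_learning_rate() uses it to sweep rates
# from min_rate to max_rate over one epoch, then restores the model's initial weights.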
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
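# OneCycleScheduler: the learning rate ramps up linearly from start_rate to max_rate
# during roughly the first half of training, ramps back down to start_rate during the
# second half, then drops linearly to last_rate over the final iterations.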
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.6576 - accuracy: 0.7743 - val_loss: 0.4901 - val_accuracy: 0.8300
Epoch 2/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.4587 - accuracy: 0.8387 - val_loss: 0.4316 - val_accuracy: 0.8490
Epoch 3/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.4119 - accuracy: 0.8560 - val_loss: 0.4117 - val_accuracy: 0.8580
Epoch 4/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.3842 - accuracy: 0.8657 - val_loss: 0.3920 - val_accuracy: 0.8638
Epoch 5/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3636 - accuracy: 0.8708 - val_loss: 0.3739 - val_accuracy: 0.8710
Epoch 6/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3460 - accuracy: 0.8767 - val_loss: 0.3742 - val_accuracy: 0.8690
Epoch 7/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.3312 - accuracy: 0.8818 - val_loss: 0.3760 - val_accuracy: 0.8656
Epoch 8/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.3194 - accuracy: 0.8846 - val_loss: 0.3583 - val_accuracy: 0.8756
Epoch 9/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3056 - accuracy: 0.8902 - val_loss: 0.3474 - val_accuracy: 0.8820
Epoch 10/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2943 - accuracy: 0.8937 - val_loss: 0.3993 - val_accuracy: 0.8562
Epoch 11/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2845 - accuracy: 0.8957 - val_loss: 0.3446 - val_accuracy: 0.8820
Epoch 12/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2720 - accuracy: 0.9020 - val_loss: 0.3348 - val_accuracy: 0.8808
Epoch 13/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2536 - accuracy: 0.9094 - val_loss: 0.3386 - val_accuracy: 0.8822
Epoch 14/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2420 - accuracy: 0.9125 - val_loss: 0.3313 - val_accuracy: 0.8858
Epoch 15/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.2288 - accuracy: 0.9174 - val_loss: 0.3241 - val_accuracy: 0.8840
Epoch 16/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2169 - accuracy: 0.9222 - val_loss: 0.3342 - val_accuracy: 0.8846
Epoch 17/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2067 - accuracy: 0.9264 - val_loss: 0.3208 - val_accuracy: 0.8874
Epoch 18/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1977 - accuracy: 0.9301 - val_loss: 0.3186 - val_accuracy: 0.8888
Epoch 19/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1892 - accuracy: 0.9329 - val_loss: 0.3278 - val_accuracy: 0.8848
Epoch 20/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1818 - accuracy: 0.9375 - val_loss: 0.3195 - val_accuracy: 0.8894
Epoch 21/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1756 - accuracy: 0.9395 - val_loss: 0.3163 - val_accuracy: 0.8948
Epoch 22/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.1701 - accuracy: 0.9416 - val_loss: 0.3177 - val_accuracy: 0.8920
Epoch 23/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.1657 - accuracy: 0.9441 - val_loss: 0.3168 - val_accuracy: 0.8944
Epoch 24/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1629 - accuracy: 0.9454 - val_loss: 0.3167 - val_accuracy: 0.8946
Epoch 25/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.1611 - accuracy: 0.9465 - val_loss: 0.3170 - val_accuracy: 0.8934
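###Markdown
To visualize the schedule that was just applied, here is a minimal sketch that replays the scheduler's piecewise-linear logic over the same number of iterations (no model is involved, it only recomputes the rates for plotting):
###Code
# Sketch: reproduce OneCycleScheduler's rate computation purely to plot the
# learning-rate trajectory used in the run above.
n_iterations = len(X_train) // batch_size * n_epochs
sched = OneCycleScheduler(n_iterations, max_rate=0.05)
rates = []
for it in range(n_iterations):
    sched.iteration = it
    if it < sched.half_iteration:
        rate = sched._interpolate(0, sched.half_iteration,
                                  sched.start_rate, sched.max_rate)
    elif it < 2 * sched.half_iteration:
        rate = sched._interpolate(sched.half_iteration, 2 * sched.half_iteration,
                                  sched.max_rate, sched.start_rate)
    else:
        rate = max(sched._interpolate(2 * sched.half_iteration, sched.iterations,
                                      sched.start_rate, sched.last_rate),
                   sched.last_rate)
    rates.append(rate)
plt.plot(rates)
plt.xlabel("Iteration")
plt.ylabel("Learning Rate")
plt.title("1Cycle scheduling (sketch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____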
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 7s 133us/sample - loss: 1.6006 - accuracy: 0.8129 - val_loss: 0.7374 - val_accuracy: 0.8236
Epoch 2/2
55000/55000 [==============================] - 7s 128us/sample - loss: 0.7179 - accuracy: 0.8265 - val_loss: 0.6905 - val_accuracy: 0.8356
###Markdown
Dropout
###Code
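# rate=0.2: at each training step, every input to the Dropout layer has a 20% chance
# of being dropped (dropout is only active during training).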
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 139us/sample - loss: 0.5856 - accuracy: 0.7992 - val_loss: 0.3908 - val_accuracy: 0.8570
Epoch 2/2
55000/55000 [==============================] - 6s 117us/sample - loss: 0.4260 - accuracy: 0.8443 - val_loss: 0.3389 - val_accuracy: 0.8730
###Markdown
Alpha Dropout
###Code
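# AlphaDropout is a dropout variant designed for SELU networks: it preserves the mean
# and standard deviation of its inputs, so self-normalization is not broken.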
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
Train on 55000 samples
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4186 - accuracy: 0.8451
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
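# These subclasses force training=True in call(), so dropout stays active at inference
# time and repeated predictions sample different dropout masks (Monte Carlo Dropout).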
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
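# max_norm(1.) constrains each neuron's vector of incoming weights to an L2 norm of at
# most 1; the weights are rescaled after each training step if they exceed the limit.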
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 6s 114us/sample - loss: 0.4734 - accuracy: 0.8364 - val_loss: 0.3999 - val_accuracy: 0.8614
Epoch 2/2
55000/55000 [==============================] - 6s 100us/sample - loss: 0.3583 - accuracy: 0.8685 - val_loss: 0.3494 - val_accuracy: 0.8746
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
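###Markdown
The learning-rate comparison mentioned above (1e-5 up to 1e-2, 10 epochs each) is not reproduced here. A minimal sketch of how such a search could look, assuming one TensorBoard run directory per candidate rate (the helper and directory names below are illustrative, not part of the original runs):
###Code
# Sketch: train a fresh copy of the 20-layer ELU network for 10 epochs per candidate
# learning rate, logging each run to its own TensorBoard directory for comparison.
def build_cifar10_dnn(learning_rate):
    dnn = keras.models.Sequential()
    dnn.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
    for _ in range(20):
        dnn.add(keras.layers.Dense(100, activation="elu",
                                   kernel_initializer="he_normal"))
    dnn.add(keras.layers.Dense(10, activation="softmax"))
    dnn.compile(loss="sparse_categorical_crossentropy",
                optimizer=keras.optimizers.Nadam(lr=learning_rate),
                metrics=["accuracy"])
    return dnn

for candidate_lr in (1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3, 1e-2):
    logdir = os.path.join(os.curdir, "my_cifar10_logs",
                          "lr_search_{:.0e}".format(candidate_lr))
    build_cifar10_dnn(candidate_lr).fit(
        X_train, y_train, epochs=10,
        validation_data=(X_valid, y_valid),
        callbacks=[keras.callbacks.TensorBoard(logdir)])
###Output
_____no_output_____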
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
5000/5000 [==============================] - 0s 65us/sample - loss: 1.5099 - accuracy: 0.4736
###Markdown
The model with the lowest validation loss gets about 47% accuracy on the validation set. It took 39 epochs to reach the lowest validation loss, with roughly 10 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 21s 466us/sample - loss: 1.8365 - accuracy: 0.3390 - val_loss: 1.6330 - val_accuracy: 0.4174
Epoch 2/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.6623 - accuracy: 0.4063 - val_loss: 1.5967 - val_accuracy: 0.4204
Epoch 3/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.5946 - accuracy: 0.4314 - val_loss: 1.5225 - val_accuracy: 0.4602
Epoch 4/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5417 - accuracy: 0.4551 - val_loss: 1.4680 - val_accuracy: 0.4756
Epoch 5/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5013 - accuracy: 0.4678 - val_loss: 1.4378 - val_accuracy: 0.4862
Epoch 6/100
45000/45000 [==============================] - 16s 361us/sample - loss: 1.4637 - accuracy: 0.4797 - val_loss: 1.4221 - val_accuracy: 0.4982
Epoch 7/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.4361 - accuracy: 0.4921 - val_loss: 1.4133 - val_accuracy: 0.4968
Epoch 8/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.4078 - accuracy: 0.4998 - val_loss: 1.3916 - val_accuracy: 0.5040
Epoch 9/100
45000/45000 [==============================] - 14s 315us/sample - loss: 1.3811 - accuracy: 0.5104 - val_loss: 1.3695 - val_accuracy: 0.5116
Epoch 10/100
45000/45000 [==============================] - 14s 318us/sample - loss: 1.3571 - accuracy: 0.5205 - val_loss: 1.3701 - val_accuracy: 0.5112
Epoch 11/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.3367 - accuracy: 0.5246 - val_loss: 1.3549 - val_accuracy: 0.5196
Epoch 12/100
45000/45000 [==============================] - 14s 316us/sample - loss: 1.3158 - accuracy: 0.5322 - val_loss: 1.4038 - val_accuracy: 0.5048
Epoch 13/100
45000/45000 [==============================] - 15s 328us/sample - loss: 1.3028 - accuracy: 0.5392 - val_loss: 1.3453 - val_accuracy: 0.5242
Epoch 14/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2798 - accuracy: 0.5460 - val_loss: 1.3427 - val_accuracy: 0.5218
Epoch 15/100
45000/45000 [==============================] - 15s 327us/sample - loss: 1.2642 - accuracy: 0.5502 - val_loss: 1.3802 - val_accuracy: 0.5072
Epoch 16/100
45000/45000 [==============================] - 15s 336us/sample - loss: 1.2497 - accuracy: 0.5592 - val_loss: 1.3870 - val_accuracy: 0.5154
Epoch 17/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.2339 - accuracy: 0.5645 - val_loss: 1.3270 - val_accuracy: 0.5366
Epoch 18/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2223 - accuracy: 0.5688 - val_loss: 1.3054 - val_accuracy: 0.5506
Epoch 19/100
45000/45000 [==============================] - 15s 339us/sample - loss: 1.2015 - accuracy: 0.5750 - val_loss: 1.3134 - val_accuracy: 0.5462
Epoch 20/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.1884 - accuracy: 0.5796 - val_loss: 1.3459 - val_accuracy: 0.5252
Epoch 21/100
45000/45000 [==============================] - 17s 370us/sample - loss: 1.1767 - accuracy: 0.5876 - val_loss: 1.3404 - val_accuracy: 0.5392
Epoch 22/100
45000/45000 [==============================] - 16s 366us/sample - loss: 1.1679 - accuracy: 0.5872 - val_loss: 1.3600 - val_accuracy: 0.5332
Epoch 23/100
45000/45000 [==============================] - 15s 337us/sample - loss: 1.1513 - accuracy: 0.5954 - val_loss: 1.3148 - val_accuracy: 0.5498
Epoch 24/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.1345 - accuracy: 0.6033 - val_loss: 1.3290 - val_accuracy: 0.5368
Epoch 25/100
45000/45000 [==============================] - 16s 350us/sample - loss: 1.1252 - accuracy: 0.6025 - val_loss: 1.3350 - val_accuracy: 0.5434
Epoch 26/100
45000/45000 [==============================] - 15s 341us/sample - loss: 1.1192 - accuracy: 0.6070 - val_loss: 1.3423 - val_accuracy: 0.5364
Epoch 27/100
45000/45000 [==============================] - 15s 342us/sample - loss: 1.1028 - accuracy: 0.6093 - val_loss: 1.3511 - val_accuracy: 0.5358
Epoch 28/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.0907 - accuracy: 0.6158 - val_loss: 1.3706 - val_accuracy: 0.5350
Epoch 29/100
45000/45000 [==============================] - 16s 345us/sample - loss: 1.0785 - accuracy: 0.6197 - val_loss: 1.3356 - val_accuracy: 0.5398
Epoch 30/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.0718 - accuracy: 0.6198 - val_loss: 1.3529 - val_accuracy: 0.5446
Epoch 31/100
45000/45000 [==============================] - 15s 333us/sample - loss: 1.0629 - accuracy: 0.6259 - val_loss: 1.3590 - val_accuracy: 0.5434
Epoch 32/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.0504 - accuracy: 0.6292 - val_loss: 1.3448 - val_accuracy: 0.5388
Epoch 33/100
45000/45000 [==============================] - 15s 325us/sample - loss: 1.0420 - accuracy: 0.6318 - val_loss: 1.3790 - val_accuracy: 0.5350
Epoch 34/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.0304 - accuracy: 0.6362 - val_loss: 1.3621 - val_accuracy: 0.5430
Epoch 35/100
45000/45000 [==============================] - 16s 356us/sample - loss: 1.0280 - accuracy: 0.6362 - val_loss: 1.3673 - val_accuracy: 0.5366
Epoch 36/100
45000/45000 [==============================] - 16s 354us/sample - loss: 1.0100 - accuracy: 0.6439 - val_loss: 1.3659 - val_accuracy: 0.5420
Epoch 37/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.0060 - accuracy: 0.6473 - val_loss: 1.3773 - val_accuracy: 0.5398
Epoch 38/100
45000/45000 [==============================] - 15s 332us/sample - loss: 0.9966 - accuracy: 0.6496 - val_loss: 1.3946 - val_accuracy: 0.5340
5000/5000 [==============================] - 1s 157us/sample - loss: 1.3054 - accuracy: 0.5506
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 39 epochs to reach the lowest validation loss, while the new model with BN took 18 epochs. That's more than twice as fast as the previous model. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 55% accuracy instead of 47%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged twice as fast, each epoch took about 16s instead of 10s, because of the extra computations required by the BN layers. So overall, although the number of epochs was reduced by 50%, the training time (wall time) was shortened by 30%, which is still pretty significant! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
5000/5000 [==============================] - 0s 74us/sample - loss: 1.4626 - accuracy: 0.5140
###Markdown
We get 51.4% accuracy, which is better than the original model, but not quite as good as the model using batch normalization. Moreover, it took 13 epochs to reach the best model, which is much faster than both the original model and the BN model, plus each epoch took only 10 seconds, just like the original model. So it's by far the fastest model to train (both in terms of epochs and wall time). e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 12s 263us/sample - loss: 1.8763 - accuracy: 0.3330 - val_loss: 1.7595 - val_accuracy: 0.3668
Epoch 2/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.6527 - accuracy: 0.4148 - val_loss: 1.7666 - val_accuracy: 0.3808
Epoch 3/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.5682 - accuracy: 0.4439 - val_loss: 1.6393 - val_accuracy: 0.4490
Epoch 4/100
45000/45000 [==============================] - 10s 211us/sample - loss: 1.5030 - accuracy: 0.4698 - val_loss: 1.6028 - val_accuracy: 0.4466
Epoch 5/100
45000/45000 [==============================] - 9s 209us/sample - loss: 1.4430 - accuracy: 0.4913 - val_loss: 1.5394 - val_accuracy: 0.4562
Epoch 6/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.4005 - accuracy: 0.5084 - val_loss: 1.5408 - val_accuracy: 0.4818
Epoch 7/100
45000/45000 [==============================] - 10s 216us/sample - loss: 1.3541 - accuracy: 0.5298 - val_loss: 1.5236 - val_accuracy: 0.4866
Epoch 8/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.3189 - accuracy: 0.5405 - val_loss: 1.5174 - val_accuracy: 0.4926
Epoch 9/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.2800 - accuracy: 0.5570 - val_loss: 1.5722 - val_accuracy: 0.4998
Epoch 10/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.2512 - accuracy: 0.5656 - val_loss: 1.4974 - val_accuracy: 0.5082
Epoch 11/100
45000/45000 [==============================] - 9s 203us/sample - loss: 1.2141 - accuracy: 0.5802 - val_loss: 1.6123 - val_accuracy: 0.4916
Epoch 12/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.1856 - accuracy: 0.5893 - val_loss: 1.5449 - val_accuracy: 0.5016
Epoch 13/100
45000/45000 [==============================] - 9s 204us/sample - loss: 1.1602 - accuracy: 0.5978 - val_loss: 1.6241 - val_accuracy: 0.5056
Epoch 14/100
45000/45000 [==============================] - 9s 199us/sample - loss: 1.1290 - accuracy: 0.6118 - val_loss: 1.6085 - val_accuracy: 0.4936
Epoch 15/100
45000/45000 [==============================] - 9s 198us/sample - loss: 1.1050 - accuracy: 0.6176 - val_loss: 1.6951 - val_accuracy: 0.4860
Epoch 16/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.0786 - accuracy: 0.6293 - val_loss: 1.5806 - val_accuracy: 0.5044
Epoch 17/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.0629 - accuracy: 0.6362 - val_loss: 1.5932 - val_accuracy: 0.4970
Epoch 18/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.0330 - accuracy: 0.6458 - val_loss: 1.5968 - val_accuracy: 0.5080
Epoch 19/100
45000/45000 [==============================] - 9s 195us/sample - loss: 1.0104 - accuracy: 0.6488 - val_loss: 1.6166 - val_accuracy: 0.5152
Epoch 20/100
45000/45000 [==============================] - 9s 206us/sample - loss: 0.9896 - accuracy: 0.6629 - val_loss: 1.6174 - val_accuracy: 0.5154
Epoch 21/100
45000/45000 [==============================] - 9s 211us/sample - loss: 0.9741 - accuracy: 0.6650 - val_loss: 1.7201 - val_accuracy: 0.5040
Epoch 22/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9475 - accuracy: 0.6769 - val_loss: 1.7498 - val_accuracy: 0.5176
Epoch 23/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.9346 - accuracy: 0.6780 - val_loss: 1.7491 - val_accuracy: 0.5020
Epoch 24/100
45000/45000 [==============================] - 10s 223us/sample - loss: 1.1878 - accuracy: 0.6792 - val_loss: 1.6664 - val_accuracy: 0.4906
Epoch 25/100
45000/45000 [==============================] - 10s 219us/sample - loss: 0.9851 - accuracy: 0.6646 - val_loss: 1.7358 - val_accuracy: 0.5086
Epoch 26/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9053 - accuracy: 0.6911 - val_loss: 1.8361 - val_accuracy: 0.5094
Epoch 27/100
45000/45000 [==============================] - 10s 215us/sample - loss: 0.8681 - accuracy: 0.7048 - val_loss: 1.8487 - val_accuracy: 0.5036
Epoch 28/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.8460 - accuracy: 0.7132 - val_loss: 1.8516 - val_accuracy: 0.5068
Epoch 29/100
45000/45000 [==============================] - 10s 223us/sample - loss: 0.8258 - accuracy: 0.7208 - val_loss: 1.9383 - val_accuracy: 0.5094
Epoch 30/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.8106 - accuracy: 0.7248 - val_loss: 2.0527 - val_accuracy: 0.4974
5000/5000 [==============================] - 0s 71us/sample - loss: 1.4974 - accuracy: 0.5082
###Markdown
The model reaches 50.8% accuracy on the validation set. That's very slightly worse than without dropout (51.4%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get virtually no accuracy improvement in this case (from 50.8% to 50.9%). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(len(X_train_scaled) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/15
45000/45000 [==============================] - 3s 69us/sample - loss: 2.0504 - accuracy: 0.2823 - val_loss: 1.7711 - val_accuracy: 0.3706
Epoch 2/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.7626 - accuracy: 0.3766 - val_loss: 1.7751 - val_accuracy: 0.3844
Epoch 3/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.6264 - accuracy: 0.4272 - val_loss: 1.6774 - val_accuracy: 0.4216
Epoch 4/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.5527 - accuracy: 0.4474 - val_loss: 1.6633 - val_accuracy: 0.4316
Epoch 5/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.4997 - accuracy: 0.4701 - val_loss: 1.5909 - val_accuracy: 0.4540
Epoch 6/15
45000/45000 [==============================] - 3s 60us/sample - loss: 1.4564 - accuracy: 0.4841 - val_loss: 1.5982 - val_accuracy: 0.4624
Epoch 7/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.4232 - accuracy: 0.4958 - val_loss: 1.6417 - val_accuracy: 0.4382
Epoch 8/15
45000/45000 [==============================] - 3s 58us/sample - loss: 1.3530 - accuracy: 0.5199 - val_loss: 1.5050 - val_accuracy: 0.4778
Epoch 9/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.2771 - accuracy: 0.5480 - val_loss: 1.5254 - val_accuracy: 0.4928
Epoch 10/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.2073 - accuracy: 0.5726 - val_loss: 1.5013 - val_accuracy: 0.5052
Epoch 11/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.1380 - accuracy: 0.5948 - val_loss: 1.4941 - val_accuracy: 0.5170
Epoch 12/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.0672 - accuracy: 0.6204 - val_loss: 1.5091 - val_accuracy: 0.5106
Epoch 13/15
45000/45000 [==============================] - 3s 56us/sample - loss: 0.9967 - accuracy: 0.6466 - val_loss: 1.5261 - val_accuracy: 0.5212
Epoch 14/15
45000/45000 [==============================] - 3s 58us/sample - loss: 0.9301 - accuracy: 0.6712 - val_loss: 1.5437 - val_accuracy: 0.5264
Epoch 15/15
45000/45000 [==============================] - 3s 59us/sample - loss: 0.8893 - accuracy: 0.6866 - val_loss: 1.5650 - val_accuracy: 0.5276
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6314 - accuracy: 0.5054 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.8416 - accuracy: 0.7247 - val_loss: 0.7130 - val_accuracy: 0.7656
Epoch 3/10
1719/1719 [==============================] - 2s 879us/step - loss: 0.7053 - accuracy: 0.7637 - val_loss: 0.6427 - val_accuracy: 0.7898
Epoch 4/10
1719/1719 [==============================] - 2s 883us/step - loss: 0.6325 - accuracy: 0.7908 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 2s 887us/step - loss: 0.5992 - accuracy: 0.8021 - val_loss: 0.5582 - val_accuracy: 0.8200
Epoch 6/10
1719/1719 [==============================] - 2s 881us/step - loss: 0.5624 - accuracy: 0.8142 - val_loss: 0.5350 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.5379 - accuracy: 0.8217 - val_loss: 0.5157 - val_accuracy: 0.8304
Epoch 8/10
1719/1719 [==============================] - 2s 895us/step - loss: 0.5152 - accuracy: 0.8295 - val_loss: 0.5078 - val_accuracy: 0.8284
Epoch 9/10
1719/1719 [==============================] - 2s 911us/step - loss: 0.5100 - accuracy: 0.8268 - val_loss: 0.4895 - val_accuracy: 0.8390
Epoch 10/10
1719/1719 [==============================] - 2s 897us/step - loss: 0.4918 - accuracy: 0.8340 - val_loss: 0.4817 - val_accuracy: 0.8396
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6969 - accuracy: 0.4974 - val_loss: 0.9255 - val_accuracy: 0.7186
Epoch 2/10
1719/1719 [==============================] - 2s 990us/step - loss: 0.8706 - accuracy: 0.7247 - val_loss: 0.7305 - val_accuracy: 0.7630
Epoch 3/10
1719/1719 [==============================] - 2s 980us/step - loss: 0.7211 - accuracy: 0.7621 - val_loss: 0.6564 - val_accuracy: 0.7882
Epoch 4/10
1719/1719 [==============================] - 2s 985us/step - loss: 0.6447 - accuracy: 0.7879 - val_loss: 0.6003 - val_accuracy: 0.8048
Epoch 5/10
1719/1719 [==============================] - 2s 967us/step - loss: 0.6077 - accuracy: 0.8004 - val_loss: 0.5656 - val_accuracy: 0.8182
Epoch 6/10
1719/1719 [==============================] - 2s 984us/step - loss: 0.5692 - accuracy: 0.8118 - val_loss: 0.5406 - val_accuracy: 0.8236
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5428 - accuracy: 0.8194 - val_loss: 0.5196 - val_accuracy: 0.8314
Epoch 8/10
1719/1719 [==============================] - 2s 983us/step - loss: 0.5193 - accuracy: 0.8284 - val_loss: 0.5113 - val_accuracy: 0.8316
Epoch 9/10
1719/1719 [==============================] - 2s 992us/step - loss: 0.5128 - accuracy: 0.8272 - val_loss: 0.4916 - val_accuracy: 0.8378
Epoch 10/10
1719/1719 [==============================] - 2s 988us/step - loss: 0.4941 - accuracy: 0.8314 - val_loss: 0.4826 - val_accuracy: 0.8398
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 12s 6ms/step - loss: 1.3556 - accuracy: 0.4808 - val_loss: 0.7711 - val_accuracy: 0.6858
Epoch 2/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7537 - accuracy: 0.7235 - val_loss: 0.7534 - val_accuracy: 0.7384
Epoch 3/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7451 - accuracy: 0.7357 - val_loss: 0.5943 - val_accuracy: 0.7834
Epoch 4/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5699 - accuracy: 0.7906 - val_loss: 0.5434 - val_accuracy: 0.8066
Epoch 5/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5569 - accuracy: 0.8051 - val_loss: 0.4907 - val_accuracy: 0.8218
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 11s 5ms/step - loss: 2.0460 - accuracy: 0.1919 - val_loss: 1.5971 - val_accuracy: 0.3048
Epoch 2/5
1719/1719 [==============================] - 8s 5ms/step - loss: 1.2654 - accuracy: 0.4591 - val_loss: 0.9156 - val_accuracy: 0.6372
Epoch 3/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.9312 - accuracy: 0.6169 - val_loss: 0.8928 - val_accuracy: 0.6246
Epoch 4/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.8188 - accuracy: 0.6710 - val_loss: 0.6914 - val_accuracy: 0.7396
Epoch 5/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.7288 - accuracy: 0.7152 - val_loss: 0.6638 - val_accuracy: 0.7380
###Markdown
Not great at all; we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
#bn1.updates #deprecated
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.2287 - accuracy: 0.5993 - val_loss: 0.5526 - val_accuracy: 0.8230
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5996 - accuracy: 0.7959 - val_loss: 0.4725 - val_accuracy: 0.8468
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5312 - accuracy: 0.8168 - val_loss: 0.4375 - val_accuracy: 0.8558
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4884 - accuracy: 0.8294 - val_loss: 0.4153 - val_accuracy: 0.8596
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4717 - accuracy: 0.8343 - val_loss: 0.3997 - val_accuracy: 0.8640
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4420 - accuracy: 0.8461 - val_loss: 0.3867 - val_accuracy: 0.8694
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4285 - accuracy: 0.8496 - val_loss: 0.3763 - val_accuracy: 0.8710
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4086 - accuracy: 0.8552 - val_loss: 0.3711 - val_accuracy: 0.8740
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4079 - accuracy: 0.8566 - val_loss: 0.3631 - val_accuracy: 0.8752
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3903 - accuracy: 0.8617 - val_loss: 0.3573 - val_accuracy: 0.8750
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has some as well; it would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.3677 - accuracy: 0.5604 - val_loss: 0.6767 - val_accuracy: 0.7812
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.7136 - accuracy: 0.7702 - val_loss: 0.5566 - val_accuracy: 0.8184
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.6123 - accuracy: 0.7990 - val_loss: 0.5007 - val_accuracy: 0.8360
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5547 - accuracy: 0.8148 - val_loss: 0.4666 - val_accuracy: 0.8448
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5255 - accuracy: 0.8230 - val_loss: 0.4434 - val_accuracy: 0.8534
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4947 - accuracy: 0.8328 - val_loss: 0.4263 - val_accuracy: 0.8550
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4736 - accuracy: 0.8385 - val_loss: 0.4130 - val_accuracy: 0.8566
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4550 - accuracy: 0.8446 - val_loss: 0.4035 - val_accuracy: 0.8612
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4495 - accuracy: 0.8440 - val_loss: 0.3943 - val_accuracy: 0.8638
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4333 - accuracy: 0.8494 - val_loss: 0.3875 - val_accuracy: 0.8660
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
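###Markdown
As a minimal sketch (an assumption, not part of the original notebook), here is how one of these clipped optimizers could be plugged into a small model; the architecture below is chosen only for illustration:
###Code
# Hypothetical usage sketch: any Keras optimizer created with clipvalue or clipnorm
# can be passed to compile(), and gradients are clipped before each weight update.
clipped_optimizer = keras.optimizers.SGD(learning_rate=1e-3, clipnorm=1.0)
model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"),
    keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
              optimizer=clipped_optimizer,
              metrics=["accuracy"])
###Output
_____no_output_____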
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 1s 83ms/step - loss: 0.6155 - accuracy: 0.6184 - val_loss: 0.5843 - val_accuracy: 0.6329
Epoch 2/4
7/7 [==============================] - 0s 9ms/step - loss: 0.5550 - accuracy: 0.6638 - val_loss: 0.5467 - val_accuracy: 0.6805
Epoch 3/4
7/7 [==============================] - 0s 8ms/step - loss: 0.4897 - accuracy: 0.7482 - val_loss: 0.5146 - val_accuracy: 0.7089
Epoch 4/4
7/7 [==============================] - 0s 8ms/step - loss: 0.4899 - accuracy: 0.7405 - val_loss: 0.4859 - val_accuracy: 0.7323
Epoch 1/16
7/7 [==============================] - 0s 28ms/step - loss: 0.4380 - accuracy: 0.7774 - val_loss: 0.3460 - val_accuracy: 0.8661
Epoch 2/16
7/7 [==============================] - 0s 9ms/step - loss: 0.2971 - accuracy: 0.9143 - val_loss: 0.2603 - val_accuracy: 0.9310
Epoch 3/16
7/7 [==============================] - 0s 9ms/step - loss: 0.2034 - accuracy: 0.9777 - val_loss: 0.2110 - val_accuracy: 0.9554
Epoch 4/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1754 - accuracy: 0.9719 - val_loss: 0.1790 - val_accuracy: 0.9696
Epoch 5/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1348 - accuracy: 0.9809 - val_loss: 0.1561 - val_accuracy: 0.9757
Epoch 6/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1172 - accuracy: 0.9973 - val_loss: 0.1392 - val_accuracy: 0.9797
Epoch 7/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1137 - accuracy: 0.9931 - val_loss: 0.1266 - val_accuracy: 0.9838
Epoch 8/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1000 - accuracy: 0.9931 - val_loss: 0.1163 - val_accuracy: 0.9858
Epoch 9/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0834 - accuracy: 1.0000 - val_loss: 0.1065 - val_accuracy: 0.9888
Epoch 10/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0775 - accuracy: 1.0000 - val_loss: 0.0999 - val_accuracy: 0.9899
Epoch 11/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0689 - accuracy: 1.0000 - val_loss: 0.0939 - val_accuracy: 0.9899
Epoch 12/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0888 - val_accuracy: 0.9899
Epoch 13/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0565 - accuracy: 1.0000 - val_loss: 0.0839 - val_accuracy: 0.9899
Epoch 14/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0494 - accuracy: 1.0000 - val_loss: 0.0802 - val_accuracy: 0.9899
Epoch 15/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0544 - accuracy: 1.0000 - val_loss: 0.0768 - val_accuracy: 0.9899
Epoch 16/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0472 - accuracy: 1.0000 - val_loss: 0.0738 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 705us/step - loss: 0.0682 - accuracy: 0.9935
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4.5!
###Code
(100 - 97.05) / (100 - 99.35)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(learning_rate=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
import math
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = math.ceil(len(X_train) / batch_size)
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
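###Markdown
As a minimal sketch (an assumption, not part of the original notebook): `keras.callbacks.LearningRateScheduler` also accepts a schedule with this two-argument signature, so the function above can be used just like the epoch-only version, without hard-coding the initial learning rate:
###Code
# Hypothetical usage of the two-argument schedule defined above; pass the callback
# to model.fit() via callbacks=[lr_scheduler] as in the previous training runs.
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
###Output
_____no_output_____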
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.learning_rate)
        K.set_value(self.model.optimizer.learning_rate, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.learning_rate)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(learning_rate=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5995 - accuracy: 0.7923 - val_loss: 0.4095 - val_accuracy: 0.8606
Epoch 2/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3890 - accuracy: 0.8613 - val_loss: 0.3738 - val_accuracy: 0.8692
Epoch 3/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3530 - accuracy: 0.8772 - val_loss: 0.3735 - val_accuracy: 0.8692
Epoch 4/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3296 - accuracy: 0.8813 - val_loss: 0.3494 - val_accuracy: 0.8798
Epoch 5/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3178 - accuracy: 0.8867 - val_loss: 0.3430 - val_accuracy: 0.8794
Epoch 6/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2930 - accuracy: 0.8951 - val_loss: 0.3414 - val_accuracy: 0.8826
Epoch 7/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2854 - accuracy: 0.8985 - val_loss: 0.3354 - val_accuracy: 0.8810
Epoch 8/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9039 - val_loss: 0.3364 - val_accuracy: 0.8824
Epoch 9/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9047 - val_loss: 0.3265 - val_accuracy: 0.8846
Epoch 10/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2570 - accuracy: 0.9084 - val_loss: 0.3238 - val_accuracy: 0.8854
Epoch 11/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2502 - accuracy: 0.9117 - val_loss: 0.3250 - val_accuracy: 0.8862
Epoch 12/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2453 - accuracy: 0.9145 - val_loss: 0.3299 - val_accuracy: 0.8830
Epoch 13/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2408 - accuracy: 0.9154 - val_loss: 0.3219 - val_accuracy: 0.8870
Epoch 14/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2380 - accuracy: 0.9154 - val_loss: 0.3221 - val_accuracy: 0.8860
Epoch 15/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2378 - accuracy: 0.9166 - val_loss: 0.3208 - val_accuracy: 0.8864
Epoch 16/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2318 - accuracy: 0.9191 - val_loss: 0.3184 - val_accuracy: 0.8892
Epoch 17/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2266 - accuracy: 0.9212 - val_loss: 0.3197 - val_accuracy: 0.8906
Epoch 18/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2284 - accuracy: 0.9185 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 19/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2286 - accuracy: 0.9205 - val_loss: 0.3197 - val_accuracy: 0.8884
Epoch 20/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2288 - accuracy: 0.9211 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 21/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2265 - accuracy: 0.9212 - val_loss: 0.3179 - val_accuracy: 0.8904
Epoch 22/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2258 - accuracy: 0.9205 - val_loss: 0.3163 - val_accuracy: 0.8914
Epoch 23/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9226 - val_loss: 0.3170 - val_accuracy: 0.8904
Epoch 24/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2182 - accuracy: 0.9244 - val_loss: 0.3165 - val_accuracy: 0.8898
Epoch 25/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9229 - val_loss: 0.3164 - val_accuracy: 0.8904
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
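###Markdown
As a minimal sketch (an assumption, mirroring the `ExponentialDecay` example above): the schedule object is passed directly to the optimizer, and the boundaries are expressed in training steps, which is why they are multiplied by `n_steps_per_epoch`:
###Code
# Hypothetical wiring of the piecewise constant schedule into an optimizer; any
# model compiled with this optimizer steps its learning rate down after epochs
# 5 and 15 (converted to step counts above).
model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
    keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
    keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
              metrics=["accuracy"])
###Output
_____no_output_____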
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.learning_rate))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.learning_rate, self.model.optimizer.learning_rate * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = math.ceil(len(X) / batch_size) * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.learning_rate)
K.set_value(model.optimizer.learning_rate, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.learning_rate, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
###Output
_____no_output_____
###Markdown
**Warning**: In the `on_batch_end()` method, `logs["loss"]` used to contain the batch loss, but in TensorFlow 2.2.0 it was replaced with the mean loss (since the start of the epoch). This explains why the graph below is much smoother than in the book (if you are using TF 2.2 or above). It also means that there is a lag between the moment the batch loss starts exploding and the moment the explosion becomes clear in the graph. So you should choose a slightly smaller learning rate than you would have chosen with the "noisy" graph. Alternatively, you can tweak the `ExponentialLearningRate` callback above so it computes the batch loss (based on the current mean loss and the previous mean loss):```pythonclass ExponentialLearningRate(keras.callbacks.Callback): def __init__(self, factor): self.factor = factor self.rates = [] self.losses = [] def on_epoch_begin(self, epoch, logs=None): self.prev_loss = 0 def on_batch_end(self, batch, logs=None): batch_loss = logs["loss"] * (batch + 1) - self.prev_loss * batch self.prev_loss = logs["loss"] self.rates.append(K.get_value(self.model.optimizer.learning_rate)) self.losses.append(batch_loss) K.set_value(self.model.optimizer.learning_rate, self.model.optimizer.learning_rate * self.factor)```
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.learning_rate, rate)
n_epochs = 25
onecycle = OneCycleScheduler(math.ceil(len(X_train) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 1s 2ms/step - loss: 0.6572 - accuracy: 0.7740 - val_loss: 0.4872 - val_accuracy: 0.8338
Epoch 2/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4580 - accuracy: 0.8397 - val_loss: 0.4274 - val_accuracy: 0.8520
Epoch 3/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4121 - accuracy: 0.8545 - val_loss: 0.4116 - val_accuracy: 0.8588
Epoch 4/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3837 - accuracy: 0.8642 - val_loss: 0.3868 - val_accuracy: 0.8688
Epoch 5/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3639 - accuracy: 0.8719 - val_loss: 0.3766 - val_accuracy: 0.8688
Epoch 6/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3456 - accuracy: 0.8775 - val_loss: 0.3739 - val_accuracy: 0.8706
Epoch 7/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3330 - accuracy: 0.8811 - val_loss: 0.3635 - val_accuracy: 0.8708
Epoch 8/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3184 - accuracy: 0.8861 - val_loss: 0.3959 - val_accuracy: 0.8610
Epoch 9/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3065 - accuracy: 0.8890 - val_loss: 0.3475 - val_accuracy: 0.8770
Epoch 10/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2943 - accuracy: 0.8927 - val_loss: 0.3392 - val_accuracy: 0.8806
Epoch 11/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2838 - accuracy: 0.8963 - val_loss: 0.3467 - val_accuracy: 0.8800
Epoch 12/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2707 - accuracy: 0.9024 - val_loss: 0.3646 - val_accuracy: 0.8696
Epoch 13/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2536 - accuracy: 0.9079 - val_loss: 0.3350 - val_accuracy: 0.8842
Epoch 14/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2405 - accuracy: 0.9135 - val_loss: 0.3465 - val_accuracy: 0.8794
Epoch 15/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2279 - accuracy: 0.9185 - val_loss: 0.3257 - val_accuracy: 0.8830
Epoch 16/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2159 - accuracy: 0.9232 - val_loss: 0.3294 - val_accuracy: 0.8824
Epoch 17/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2062 - accuracy: 0.9263 - val_loss: 0.3333 - val_accuracy: 0.8882
Epoch 18/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1978 - accuracy: 0.9301 - val_loss: 0.3235 - val_accuracy: 0.8898
Epoch 19/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1892 - accuracy: 0.9337 - val_loss: 0.3233 - val_accuracy: 0.8906
Epoch 20/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1821 - accuracy: 0.9365 - val_loss: 0.3224 - val_accuracy: 0.8928
Epoch 21/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1752 - accuracy: 0.9400 - val_loss: 0.3220 - val_accuracy: 0.8908
Epoch 22/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1700 - accuracy: 0.9416 - val_loss: 0.3180 - val_accuracy: 0.8962
Epoch 23/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1655 - accuracy: 0.9438 - val_loss: 0.3187 - val_accuracy: 0.8940
Epoch 24/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1627 - accuracy: 0.9454 - val_loss: 0.3177 - val_accuracy: 0.8932
Epoch 25/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1610 - accuracy: 0.9462 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 3.2911 - accuracy: 0.7924 - val_loss: 0.7218 - val_accuracy: 0.8310
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.7282 - accuracy: 0.8245 - val_loss: 0.6826 - val_accuracy: 0.8382
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 0.7611 - accuracy: 0.7576 - val_loss: 0.3730 - val_accuracy: 0.8644
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4306 - accuracy: 0.8401 - val_loss: 0.3395 - val_accuracy: 0.8722
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4225 - accuracy: 0.8432
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
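###Markdown
As a minimal sketch (an assumption, not part of the original notebook), the same Monte Carlo averaging can score the whole test set with `mc_model`, mirroring the accuracy computation done earlier with `model(..., training=True)`:
###Code
# Hypothetical MC Dropout evaluation: average a few stochastic forward passes
# (10 here, chosen arbitrarily to keep it cheap) and take the argmax as the prediction.
y_probas_mc = np.stack([mc_model.predict(X_test_scaled)
                        for sample in range(10)])
y_proba_mc = y_probas_mc.mean(axis=0)
y_pred_mc = np.argmax(y_proba_mc, axis=1)
np.sum(y_pred_mc == y_test) / len(y_test)
###Output
_____no_output_____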
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5763 - accuracy: 0.8020 - val_loss: 0.3674 - val_accuracy: 0.8674
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3545 - accuracy: 0.8709 - val_loss: 0.3714 - val_accuracy: 0.8662
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(learning_rate=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
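###Markdown
As a minimal sketch (an assumption, not part of the original notebook) of how the learning-rate comparison described above could be scripted: build a fresh copy of the architecture for each candidate rate, train it for 10 epochs, and log each run to its own TensorBoard directory. The `build_cifar10_model` helper and the log-directory names are illustrative only, and the loop is left commented out because it relies on the CIFAR10 data loaded in the next cells:
###Code
def build_cifar10_model(learning_rate):
    # Same 20 x 100 ELU architecture as above, compiled with a candidate learning rate.
    model = keras.models.Sequential()
    model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
    for _ in range(20):
        model.add(keras.layers.Dense(100, activation="elu",
                                     kernel_initializer="he_normal"))
    model.add(keras.layers.Dense(10, activation="softmax"))
    model.compile(loss="sparse_categorical_crossentropy",
                  optimizer=keras.optimizers.Nadam(learning_rate=learning_rate),
                  metrics=["accuracy"])
    return model

# for lr in (1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3, 1e-2):
#     run_logdir = os.path.join(os.curdir, "my_cifar10_logs",
#                               "lr_search_{:.0e}".format(lr))
#     build_cifar10_model(lr).fit(X_train, y_train, epochs=10,
#                                 validation_data=(X_valid, y_valid),
#                                 callbacks=[keras.callbacks.TensorBoard(run_logdir)])
###Output
_____no_output_____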
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4960 - accuracy: 0.4762
###Markdown
The model with the lowest validation loss gets about 47.6% accuracy on the validation set. It took 27 epochs to reach the lowest validation loss, with roughly 8 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 19s 9ms/step - loss: 1.9765 - accuracy: 0.2968 - val_loss: 1.6602 - val_accuracy: 0.4042
Epoch 2/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6787 - accuracy: 0.4056 - val_loss: 1.5887 - val_accuracy: 0.4304
Epoch 3/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6097 - accuracy: 0.4274 - val_loss: 1.5781 - val_accuracy: 0.4326
Epoch 4/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5574 - accuracy: 0.4486 - val_loss: 1.5064 - val_accuracy: 0.4676
Epoch 5/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5075 - accuracy: 0.4642 - val_loss: 1.4412 - val_accuracy: 0.4844
Epoch 6/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4664 - accuracy: 0.4787 - val_loss: 1.4179 - val_accuracy: 0.4984
Epoch 7/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4334 - accuracy: 0.4932 - val_loss: 1.4277 - val_accuracy: 0.4906
Epoch 8/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.4054 - accuracy: 0.5038 - val_loss: 1.3843 - val_accuracy: 0.5130
Epoch 9/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3816 - accuracy: 0.5106 - val_loss: 1.3691 - val_accuracy: 0.5108
Epoch 10/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3547 - accuracy: 0.5206 - val_loss: 1.3552 - val_accuracy: 0.5226
Epoch 11/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.3244 - accuracy: 0.5371 - val_loss: 1.3678 - val_accuracy: 0.5142
Epoch 12/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3078 - accuracy: 0.5393 - val_loss: 1.3844 - val_accuracy: 0.5080
Epoch 13/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2889 - accuracy: 0.5431 - val_loss: 1.3566 - val_accuracy: 0.5164
Epoch 14/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2607 - accuracy: 0.5559 - val_loss: 1.3626 - val_accuracy: 0.5248
Epoch 15/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2580 - accuracy: 0.5587 - val_loss: 1.3616 - val_accuracy: 0.5276
Epoch 16/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2441 - accuracy: 0.5586 - val_loss: 1.3350 - val_accuracy: 0.5286
Epoch 17/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2241 - accuracy: 0.5676 - val_loss: 1.3370 - val_accuracy: 0.5408
Epoch 18/100
<<29 more lines>>
Epoch 33/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0336 - accuracy: 0.6369 - val_loss: 1.3682 - val_accuracy: 0.5450
Epoch 34/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.0228 - accuracy: 0.6388 - val_loss: 1.3348 - val_accuracy: 0.5458
Epoch 35/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0205 - accuracy: 0.6407 - val_loss: 1.3490 - val_accuracy: 0.5440
Epoch 36/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.0008 - accuracy: 0.6489 - val_loss: 1.3568 - val_accuracy: 0.5408
Epoch 37/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9785 - accuracy: 0.6543 - val_loss: 1.3628 - val_accuracy: 0.5396
Epoch 38/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9832 - accuracy: 0.6592 - val_loss: 1.3617 - val_accuracy: 0.5482
Epoch 39/100
1407/1407 [==============================] - 12s 8ms/step - loss: 0.9707 - accuracy: 0.6581 - val_loss: 1.3767 - val_accuracy: 0.5446
Epoch 40/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9590 - accuracy: 0.6651 - val_loss: 1.4200 - val_accuracy: 0.5314
Epoch 41/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9548 - accuracy: 0.6668 - val_loss: 1.3692 - val_accuracy: 0.5450
Epoch 42/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9480 - accuracy: 0.6667 - val_loss: 1.3841 - val_accuracy: 0.5310
Epoch 43/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9411 - accuracy: 0.6716 - val_loss: 1.4036 - val_accuracy: 0.5382
Epoch 44/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9383 - accuracy: 0.6708 - val_loss: 1.4114 - val_accuracy: 0.5236
Epoch 45/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9258 - accuracy: 0.6769 - val_loss: 1.4224 - val_accuracy: 0.5324
Epoch 46/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9072 - accuracy: 0.6836 - val_loss: 1.3875 - val_accuracy: 0.5442
Epoch 47/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8996 - accuracy: 0.6850 - val_loss: 1.4449 - val_accuracy: 0.5280
Epoch 48/100
1407/1407 [==============================] - 13s 9ms/step - loss: 0.9050 - accuracy: 0.6835 - val_loss: 1.4167 - val_accuracy: 0.5338
Epoch 49/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8934 - accuracy: 0.6880 - val_loss: 1.4260 - val_accuracy: 0.5294
157/157 [==============================] - 1s 2ms/step - loss: 1.3344 - accuracy: 0.5398
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 27 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 5 epochs and continued to make progress until the 16th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 54.0% accuracy instead of 47.6%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 12s instead of 8s, because of the extra computations required by the BN layers. But overall the training time (wall time) was shortened significantly! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustements to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4633 - accuracy: 0.4792
###Markdown
We get 47.9% accuracy, which is not much better than the original model (47.6%), and not as good as the model using batch normalization (54.0%). However, convergence was almost as fast as with the BN model, plus each epoch took only 7 seconds. So it's by far the fastest model to train so far. e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 9s 5ms/step - loss: 2.0583 - accuracy: 0.2742 - val_loss: 1.7429 - val_accuracy: 0.3858
Epoch 2/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.6852 - accuracy: 0.4008 - val_loss: 1.7055 - val_accuracy: 0.3792
Epoch 3/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5963 - accuracy: 0.4413 - val_loss: 1.7401 - val_accuracy: 0.4072
Epoch 4/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5231 - accuracy: 0.4634 - val_loss: 1.5728 - val_accuracy: 0.4584
Epoch 5/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.4619 - accuracy: 0.4887 - val_loss: 1.5448 - val_accuracy: 0.4702
Epoch 6/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.4074 - accuracy: 0.5061 - val_loss: 1.5678 - val_accuracy: 0.4664
Epoch 7/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.3718 - accuracy: 0.5222 - val_loss: 1.5764 - val_accuracy: 0.4824
Epoch 8/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.3220 - accuracy: 0.5387 - val_loss: 1.4805 - val_accuracy: 0.4890
Epoch 9/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2908 - accuracy: 0.5487 - val_loss: 1.5521 - val_accuracy: 0.4638
Epoch 10/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.2537 - accuracy: 0.5607 - val_loss: 1.5281 - val_accuracy: 0.4924
Epoch 11/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2215 - accuracy: 0.5782 - val_loss: 1.5147 - val_accuracy: 0.5046
Epoch 12/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.1910 - accuracy: 0.5831 - val_loss: 1.5248 - val_accuracy: 0.5002
Epoch 13/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1659 - accuracy: 0.5982 - val_loss: 1.5620 - val_accuracy: 0.5066
Epoch 14/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1282 - accuracy: 0.6120 - val_loss: 1.5440 - val_accuracy: 0.5180
Epoch 15/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1127 - accuracy: 0.6133 - val_loss: 1.5782 - val_accuracy: 0.5146
Epoch 16/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0917 - accuracy: 0.6266 - val_loss: 1.6182 - val_accuracy: 0.5182
Epoch 17/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.0620 - accuracy: 0.6331 - val_loss: 1.6285 - val_accuracy: 0.5126
Epoch 18/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0433 - accuracy: 0.6413 - val_loss: 1.6299 - val_accuracy: 0.5158
Epoch 19/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0087 - accuracy: 0.6549 - val_loss: 1.7172 - val_accuracy: 0.5062
Epoch 20/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9950 - accuracy: 0.6571 - val_loss: 1.6524 - val_accuracy: 0.5098
Epoch 21/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9848 - accuracy: 0.6652 - val_loss: 1.7686 - val_accuracy: 0.5038
Epoch 22/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9597 - accuracy: 0.6744 - val_loss: 1.6177 - val_accuracy: 0.5084
Epoch 23/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9399 - accuracy: 0.6790 - val_loss: 1.7095 - val_accuracy: 0.5082
Epoch 24/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9148 - accuracy: 0.6884 - val_loss: 1.7160 - val_accuracy: 0.5150
Epoch 25/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9023 - accuracy: 0.6949 - val_loss: 1.7017 - val_accuracy: 0.5152
Epoch 26/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8732 - accuracy: 0.7031 - val_loss: 1.7274 - val_accuracy: 0.5088
Epoch 27/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.8542 - accuracy: 0.7091 - val_loss: 1.7648 - val_accuracy: 0.5166
Epoch 28/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8499 - accuracy: 0.7118 - val_loss: 1.7973 - val_accuracy: 0.5000
157/157 [==============================] - 0s 1ms/step - loss: 1.4805 - accuracy: 0.4890
###Markdown
The model reaches 48.9% accuracy on the validation set. That's very slightly better than without dropout (47.6%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
    Y_probas = [mc_model.predict(X) for _ in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get no accuracy improvement in this case (we're still at 48.9% accuracy). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(learning_rate=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(math.ceil(len(X_train_scaled) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/15
352/352 [==============================] - 3s 6ms/step - loss: 2.2298 - accuracy: 0.2349 - val_loss: 1.7841 - val_accuracy: 0.3834
Epoch 2/15
352/352 [==============================] - 2s 6ms/step - loss: 1.7928 - accuracy: 0.3689 - val_loss: 1.6806 - val_accuracy: 0.4086
Epoch 3/15
352/352 [==============================] - 2s 6ms/step - loss: 1.6475 - accuracy: 0.4190 - val_loss: 1.6378 - val_accuracy: 0.4350
Epoch 4/15
352/352 [==============================] - 2s 6ms/step - loss: 1.5428 - accuracy: 0.4543 - val_loss: 1.6266 - val_accuracy: 0.4390
Epoch 5/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4865 - accuracy: 0.4769 - val_loss: 1.6158 - val_accuracy: 0.4384
Epoch 6/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4339 - accuracy: 0.4866 - val_loss: 1.5850 - val_accuracy: 0.4412
Epoch 7/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4042 - accuracy: 0.5056 - val_loss: 1.6146 - val_accuracy: 0.4384
Epoch 8/15
352/352 [==============================] - 2s 6ms/step - loss: 1.3437 - accuracy: 0.5229 - val_loss: 1.5299 - val_accuracy: 0.4846
Epoch 9/15
352/352 [==============================] - 2s 5ms/step - loss: 1.2721 - accuracy: 0.5459 - val_loss: 1.5145 - val_accuracy: 0.4874
Epoch 10/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1942 - accuracy: 0.5698 - val_loss: 1.4958 - val_accuracy: 0.5040
Epoch 11/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1211 - accuracy: 0.6033 - val_loss: 1.5406 - val_accuracy: 0.4984
Epoch 12/15
352/352 [==============================] - 2s 6ms/step - loss: 1.0673 - accuracy: 0.6161 - val_loss: 1.5284 - val_accuracy: 0.5144
Epoch 13/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9927 - accuracy: 0.6435 - val_loss: 1.5449 - val_accuracy: 0.5140
Epoch 14/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9205 - accuracy: 0.6703 - val_loss: 1.5652 - val_accuracy: 0.5224
Epoch 15/15
352/352 [==============================] - 2s 6ms/step - loss: 0.8936 - accuracy: 0.6801 - val_loss: 1.5912 - val_accuracy: 0.5198
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure Matplotlib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 50us/sample - loss: 1.2806 - accuracy: 0.6250 - val_loss: 0.8883 - val_accuracy: 0.7152
Epoch 2/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.7954 - accuracy: 0.7373 - val_loss: 0.7135 - val_accuracy: 0.7648
Epoch 3/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.6816 - accuracy: 0.7727 - val_loss: 0.6356 - val_accuracy: 0.7882
Epoch 4/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.6215 - accuracy: 0.7935 - val_loss: 0.5922 - val_accuracy: 0.8012
Epoch 5/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5830 - accuracy: 0.8081 - val_loss: 0.5596 - val_accuracy: 0.8172
Epoch 6/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5553 - accuracy: 0.8155 - val_loss: 0.5338 - val_accuracy: 0.8240
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5340 - accuracy: 0.8221 - val_loss: 0.5157 - val_accuracy: 0.8310
Epoch 8/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5172 - accuracy: 0.8265 - val_loss: 0.5035 - val_accuracy: 0.8336
Epoch 9/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5036 - accuracy: 0.8299 - val_loss: 0.4950 - val_accuracy: 0.8354
Epoch 10/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.4922 - accuracy: 0.8324 - val_loss: 0.4797 - val_accuracy: 0.8430
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 61us/sample - loss: 1.3460 - accuracy: 0.6233 - val_loss: 0.9251 - val_accuracy: 0.7208
Epoch 2/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.8208 - accuracy: 0.7359 - val_loss: 0.7318 - val_accuracy: 0.7626
Epoch 3/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.6974 - accuracy: 0.7695 - val_loss: 0.6500 - val_accuracy: 0.7886
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.6338 - accuracy: 0.7904 - val_loss: 0.6000 - val_accuracy: 0.8070
Epoch 5/10
55000/55000 [==============================] - 3s 57us/sample - loss: 0.5920 - accuracy: 0.8045 - val_loss: 0.5662 - val_accuracy: 0.8172
Epoch 6/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5620 - accuracy: 0.8138 - val_loss: 0.5416 - val_accuracy: 0.8230
Epoch 7/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5393 - accuracy: 0.8203 - val_loss: 0.5218 - val_accuracy: 0.8302
Epoch 8/10
55000/55000 [==============================] - 3s 57us/sample - loss: 0.5216 - accuracy: 0.8248 - val_loss: 0.5051 - val_accuracy: 0.8340
Epoch 9/10
55000/55000 [==============================] - 3s 59us/sample - loss: 0.5069 - accuracy: 0.8289 - val_loss: 0.4923 - val_accuracy: 0.8384
Epoch 10/10
55000/55000 [==============================] - 3s 62us/sample - loss: 0.4948 - accuracy: 0.8322 - val_loss: 0.4847 - val_accuracy: 0.8372
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial: just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
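###Markdown
Here is a minimal sketch (for illustration only, not a model trained in this notebook) of a small Fashion MNIST network built entirely with ELU activations and He initialization, reusing the variables defined above:
###Code
# Illustrative sketch only: a small ELU network with He initialization
elu_model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
    keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
    keras.layers.Dense(10, activation="softmax")
])
elu_model.compile(loss="sparse_categorical_crossentropy",
                  optimizer=keras.optimizers.SGD(learning_rate=1e-3),
                  metrics=["accuracy"])
###Output
_____no_output_____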
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 35s 644us/sample - loss: 1.0197 - accuracy: 0.6154 - val_loss: 0.7386 - val_accuracy: 0.7348
Epoch 2/5
55000/55000 [==============================] - 33s 607us/sample - loss: 0.7149 - accuracy: 0.7401 - val_loss: 0.6187 - val_accuracy: 0.7774
Epoch 3/5
55000/55000 [==============================] - 32s 583us/sample - loss: 0.6193 - accuracy: 0.7803 - val_loss: 0.5926 - val_accuracy: 0.8036
Epoch 4/5
55000/55000 [==============================] - 32s 586us/sample - loss: 0.5555 - accuracy: 0.8043 - val_loss: 0.5208 - val_accuracy: 0.8262
Epoch 5/5
55000/55000 [==============================] - 32s 573us/sample - loss: 0.5159 - accuracy: 0.8238 - val_loss: 0.4790 - val_accuracy: 0.8358
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 18s 319us/sample - loss: 1.9174 - accuracy: 0.2242 - val_loss: 1.3856 - val_accuracy: 0.3846
Epoch 2/5
55000/55000 [==============================] - 15s 279us/sample - loss: 1.2147 - accuracy: 0.4750 - val_loss: 1.0691 - val_accuracy: 0.5510
Epoch 3/5
55000/55000 [==============================] - 15s 281us/sample - loss: 0.9576 - accuracy: 0.6025 - val_loss: 0.7688 - val_accuracy: 0.7036
Epoch 4/5
55000/55000 [==============================] - 15s 281us/sample - loss: 0.8116 - accuracy: 0.6762 - val_loss: 0.7276 - val_accuracy: 0.7288
Epoch 5/5
55000/55000 [==============================] - 15s 278us/sample - loss: 0.8167 - accuracy: 0.6862 - val_loss: 0.7697 - val_accuracy: 0.7032
###Markdown
Not great at all: we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 5s 85us/sample - loss: 0.8756 - accuracy: 0.7140 - val_loss: 0.5514 - val_accuracy: 0.8212
Epoch 2/10
55000/55000 [==============================] - 4s 74us/sample - loss: 0.5765 - accuracy: 0.8033 - val_loss: 0.4742 - val_accuracy: 0.8436
Epoch 3/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.5146 - accuracy: 0.8216 - val_loss: 0.4382 - val_accuracy: 0.8530
Epoch 4/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4821 - accuracy: 0.8322 - val_loss: 0.4170 - val_accuracy: 0.8604
Epoch 5/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4589 - accuracy: 0.8402 - val_loss: 0.4003 - val_accuracy: 0.8658
Epoch 6/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4428 - accuracy: 0.8459 - val_loss: 0.3883 - val_accuracy: 0.8698
Epoch 7/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4220 - accuracy: 0.8521 - val_loss: 0.3792 - val_accuracy: 0.8720
Epoch 8/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4150 - accuracy: 0.8546 - val_loss: 0.3696 - val_accuracy: 0.8754
Epoch 9/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4013 - accuracy: 0.8589 - val_loss: 0.3629 - val_accuracy: 0.8746
Epoch 10/10
55000/55000 [==============================] - 4s 74us/sample - loss: 0.3931 - accuracy: 0.8615 - val_loss: 0.3581 - val_accuracy: 0.8766
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has its own offset parameters; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 5s 89us/sample - loss: 0.8617 - accuracy: 0.7095 - val_loss: 0.5649 - val_accuracy: 0.8102
Epoch 2/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.5803 - accuracy: 0.8015 - val_loss: 0.4833 - val_accuracy: 0.8344
Epoch 3/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.5153 - accuracy: 0.8208 - val_loss: 0.4463 - val_accuracy: 0.8462
Epoch 4/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.4846 - accuracy: 0.8307 - val_loss: 0.4256 - val_accuracy: 0.8530
Epoch 5/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.4576 - accuracy: 0.8402 - val_loss: 0.4106 - val_accuracy: 0.8590
Epoch 6/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4401 - accuracy: 0.8467 - val_loss: 0.3973 - val_accuracy: 0.8610
Epoch 7/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4296 - accuracy: 0.8482 - val_loss: 0.3899 - val_accuracy: 0.8650
Epoch 8/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.4127 - accuracy: 0.8559 - val_loss: 0.3818 - val_accuracy: 0.8658
Epoch 9/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4007 - accuracy: 0.8588 - val_loss: 0.3741 - val_accuracy: 0.8682
Epoch 10/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.3929 - accuracy: 0.8621 - val_loss: 0.3694 - val_accuracy: 0.8734
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
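###Markdown
For illustration, here is a minimal sketch of how such a clipped optimizer is used: it is passed to `compile()` like any other optimizer, and with `clipvalue=1.0` every component of the gradient is clipped to the range [-1.0, 1.0] before the weights are updated (reusing the `model` defined above purely as an example):
###Code
# Illustrative sketch only: compiling a model with gradient clipping by value
model.compile(loss="sparse_categorical_crossentropy",
              optimizer=keras.optimizers.SGD(clipvalue=1.0),
              metrics=["accuracy"])
###Output
_____no_output_____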
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
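# model_B_on_A shares model_A's layers, so training it would also modify
# model_A; cloning model_A (and copying its weights) keeps an untouched copy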
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Train on 200 samples, validate on 986 samples
Epoch 1/4
200/200 [==============================] - 0s 2ms/sample - loss: 0.5851 - accuracy: 0.6600 - val_loss: 0.5855 - val_accuracy: 0.6318
Epoch 2/4
200/200 [==============================] - 0s 303us/sample - loss: 0.5484 - accuracy: 0.6850 - val_loss: 0.5484 - val_accuracy: 0.6775
Epoch 3/4
200/200 [==============================] - 0s 294us/sample - loss: 0.5116 - accuracy: 0.7250 - val_loss: 0.5141 - val_accuracy: 0.7160
Epoch 4/4
200/200 [==============================] - 0s 316us/sample - loss: 0.4779 - accuracy: 0.7450 - val_loss: 0.4859 - val_accuracy: 0.7363
Train on 200 samples, validate on 986 samples
Epoch 1/16
200/200 [==============================] - 0s 2ms/sample - loss: 0.3989 - accuracy: 0.8050 - val_loss: 0.3419 - val_accuracy: 0.8702
Epoch 2/16
200/200 [==============================] - 0s 328us/sample - loss: 0.2795 - accuracy: 0.9300 - val_loss: 0.2624 - val_accuracy: 0.9280
Epoch 3/16
200/200 [==============================] - 0s 319us/sample - loss: 0.2128 - accuracy: 0.9650 - val_loss: 0.2150 - val_accuracy: 0.9544
Epoch 4/16
200/200 [==============================] - 0s 318us/sample - loss: 0.1720 - accuracy: 0.9800 - val_loss: 0.1826 - val_accuracy: 0.9635
Epoch 5/16
200/200 [==============================] - 0s 317us/sample - loss: 0.1436 - accuracy: 0.9800 - val_loss: 0.1586 - val_accuracy: 0.9736
Epoch 6/16
200/200 [==============================] - 0s 317us/sample - loss: 0.1231 - accuracy: 0.9850 - val_loss: 0.1407 - val_accuracy: 0.9807
Epoch 7/16
200/200 [==============================] - 0s 325us/sample - loss: 0.1074 - accuracy: 0.9900 - val_loss: 0.1270 - val_accuracy: 0.9828
Epoch 8/16
200/200 [==============================] - 0s 326us/sample - loss: 0.0953 - accuracy: 0.9950 - val_loss: 0.1158 - val_accuracy: 0.9848
Epoch 9/16
200/200 [==============================] - 0s 319us/sample - loss: 0.0854 - accuracy: 1.0000 - val_loss: 0.1076 - val_accuracy: 0.9878
Epoch 10/16
200/200 [==============================] - 0s 322us/sample - loss: 0.0781 - accuracy: 1.0000 - val_loss: 0.1007 - val_accuracy: 0.9888
Epoch 11/16
200/200 [==============================] - 0s 316us/sample - loss: 0.0718 - accuracy: 1.0000 - val_loss: 0.0944 - val_accuracy: 0.9888
Epoch 12/16
200/200 [==============================] - 0s 319us/sample - loss: 0.0662 - accuracy: 1.0000 - val_loss: 0.0891 - val_accuracy: 0.9899
Epoch 13/16
200/200 [==============================] - 0s 318us/sample - loss: 0.0613 - accuracy: 1.0000 - val_loss: 0.0846 - val_accuracy: 0.9899
Epoch 14/16
200/200 [==============================] - 0s 332us/sample - loss: 0.0574 - accuracy: 1.0000 - val_loss: 0.0806 - val_accuracy: 0.9899
Epoch 15/16
200/200 [==============================] - 0s 320us/sample - loss: 0.0538 - accuracy: 1.0000 - val_loss: 0.0770 - val_accuracy: 0.9899
Epoch 16/16
200/200 [==============================] - 0s 320us/sample - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.0740 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
2000/2000 [==============================] - 0s 38us/sample - loss: 0.0689 - accuracy: 0.9925
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of almost 4!
###Code
(100 - 97.05) / (100 - 99.25)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
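###Markdown
A minimal sketch of how this second form is wired up (it uses the same `LearningRateScheduler` callback as before; since the function multiplies the previous learning rate, the decay now starts from the optimizer's initial learning rate rather than from a hard-coded `lr0`):
###Code
# Illustrative sketch only: Keras calls the schedule with (epoch, current_lr)
# at the start of each epoch; the callback is then passed to fit() as before.
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
###Output
_____no_output_____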
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
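        # index of the first boundary above `epoch`, minus one; when no
        # boundary is above `epoch`, argmax returns 0 and the -1 selects
        # the last value, as intended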
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4887 - accuracy: 0.8282 - val_loss: 0.4245 - val_accuracy: 0.8526
Epoch 2/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.3830 - accuracy: 0.8641 - val_loss: 0.3798 - val_accuracy: 0.8688
Epoch 3/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.3491 - accuracy: 0.8758 - val_loss: 0.3650 - val_accuracy: 0.8730
Epoch 4/25
55000/55000 [==============================] - 4s 78us/sample - loss: 0.3267 - accuracy: 0.8839 - val_loss: 0.3564 - val_accuracy: 0.8746
Epoch 5/25
55000/55000 [==============================] - 4s 72us/sample - loss: 0.3102 - accuracy: 0.8893 - val_loss: 0.3493 - val_accuracy: 0.8770
Epoch 6/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2969 - accuracy: 0.8939 - val_loss: 0.3400 - val_accuracy: 0.8818
Epoch 7/25
55000/55000 [==============================] - 4s 77us/sample - loss: 0.2855 - accuracy: 0.8983 - val_loss: 0.3385 - val_accuracy: 0.8830
Epoch 8/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2764 - accuracy: 0.9025 - val_loss: 0.3372 - val_accuracy: 0.8824
Epoch 9/25
55000/55000 [==============================] - 4s 67us/sample - loss: 0.2684 - accuracy: 0.9039 - val_loss: 0.3337 - val_accuracy: 0.8848
Epoch 10/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2613 - accuracy: 0.9072 - val_loss: 0.3277 - val_accuracy: 0.8862
Epoch 11/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2555 - accuracy: 0.9086 - val_loss: 0.3273 - val_accuracy: 0.8860
Epoch 12/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2500 - accuracy: 0.9111 - val_loss: 0.3244 - val_accuracy: 0.8840
Epoch 13/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2454 - accuracy: 0.9124 - val_loss: 0.3194 - val_accuracy: 0.8904
Epoch 14/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2414 - accuracy: 0.9141 - val_loss: 0.3226 - val_accuracy: 0.8884
Epoch 15/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2378 - accuracy: 0.9160 - val_loss: 0.3233 - val_accuracy: 0.8860
Epoch 16/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2347 - accuracy: 0.9174 - val_loss: 0.3207 - val_accuracy: 0.8904
Epoch 17/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2318 - accuracy: 0.9179 - val_loss: 0.3195 - val_accuracy: 0.8892
Epoch 18/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2293 - accuracy: 0.9193 - val_loss: 0.3184 - val_accuracy: 0.8916
Epoch 19/25
55000/55000 [==============================] - 4s 67us/sample - loss: 0.2272 - accuracy: 0.9201 - val_loss: 0.3196 - val_accuracy: 0.8886
Epoch 20/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2253 - accuracy: 0.9206 - val_loss: 0.3190 - val_accuracy: 0.8918
Epoch 21/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2235 - accuracy: 0.9214 - val_loss: 0.3176 - val_accuracy: 0.8912
Epoch 22/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2220 - accuracy: 0.9220 - val_loss: 0.3181 - val_accuracy: 0.8900
Epoch 23/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2206 - accuracy: 0.9226 - val_loss: 0.3187 - val_accuracy: 0.8894
Epoch 24/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2193 - accuracy: 0.9231 - val_loss: 0.3168 - val_accuracy: 0.8908
Epoch 25/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2181 - accuracy: 0.9234 - val_loss: 0.3171 - val_accuracy: 0.8898
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
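###Markdown
A minimal sketch of how this schedule object is then used (the same pattern as the `ExponentialDecay` schedule above): it is passed directly as the optimizer's learning rate, and the optimizer queries it at every training step (reusing the `model` defined above purely as an example):
###Code
# Illustrative sketch only: a tf.keras schedule object replaces the constant
# learning rate passed to the optimizer.
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
###Output
_____no_output_____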
###Markdown
1Cycle scheduling
###Code
K = keras.backend
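# ExponentialLearningRate multiplies the learning rate by a constant factor
# after every batch and records the (learning rate, loss) pairs;
# find_learning_rate() below uses it to sweep rates from min_rate to max_rate
# during a single throwaway run (the initial weights and learning rate are
# restored afterwards).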
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
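# OneCycleScheduler implements the 1cycle policy: the learning rate ramps up
# linearly from start_rate to max_rate over the first half of training, back
# down to start_rate over the second half, then drops linearly to last_rate
# over the final iterations.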
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.6569 - accuracy: 0.7750 - val_loss: 0.4875 - val_accuracy: 0.8300
Epoch 2/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.4584 - accuracy: 0.8391 - val_loss: 0.4390 - val_accuracy: 0.8476
Epoch 3/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.4124 - accuracy: 0.8541 - val_loss: 0.4102 - val_accuracy: 0.8570
Epoch 4/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.3842 - accuracy: 0.8643 - val_loss: 0.3893 - val_accuracy: 0.8652
Epoch 5/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3641 - accuracy: 0.8707 - val_loss: 0.3736 - val_accuracy: 0.8678
Epoch 6/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.3456 - accuracy: 0.8781 - val_loss: 0.3652 - val_accuracy: 0.8726
Epoch 7/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.3318 - accuracy: 0.8818 - val_loss: 0.3596 - val_accuracy: 0.8768
Epoch 8/25
55000/55000 [==============================] - 1s 24us/sample - loss: 0.3180 - accuracy: 0.8862 - val_loss: 0.3845 - val_accuracy: 0.8602
Epoch 9/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.3062 - accuracy: 0.8893 - val_loss: 0.3824 - val_accuracy: 0.8660
Epoch 10/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.2938 - accuracy: 0.8934 - val_loss: 0.3516 - val_accuracy: 0.8742
Epoch 11/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.2838 - accuracy: 0.8975 - val_loss: 0.3609 - val_accuracy: 0.8740
Epoch 12/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.2716 - accuracy: 0.9025 - val_loss: 0.3843 - val_accuracy: 0.8666
Epoch 13/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.2541 - accuracy: 0.9091 - val_loss: 0.3282 - val_accuracy: 0.8844
Epoch 14/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.2390 - accuracy: 0.9139 - val_loss: 0.3336 - val_accuracy: 0.8838
Epoch 15/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.2273 - accuracy: 0.9177 - val_loss: 0.3283 - val_accuracy: 0.8884
Epoch 16/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.2156 - accuracy: 0.9234 - val_loss: 0.3288 - val_accuracy: 0.8862
Epoch 17/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.2062 - accuracy: 0.9265 - val_loss: 0.3215 - val_accuracy: 0.8896
Epoch 18/25
55000/55000 [==============================] - 1s 24us/sample - loss: 0.1973 - accuracy: 0.9299 - val_loss: 0.3284 - val_accuracy: 0.8912
Epoch 19/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.1892 - accuracy: 0.9344 - val_loss: 0.3229 - val_accuracy: 0.8904
Epoch 20/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.1822 - accuracy: 0.9366 - val_loss: 0.3196 - val_accuracy: 0.8902
Epoch 21/25
55000/55000 [==============================] - 1s 24us/sample - loss: 0.1758 - accuracy: 0.9388 - val_loss: 0.3184 - val_accuracy: 0.8940
Epoch 22/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.1699 - accuracy: 0.9422 - val_loss: 0.3221 - val_accuracy: 0.8912
Epoch 23/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.1657 - accuracy: 0.9444 - val_loss: 0.3173 - val_accuracy: 0.8944
Epoch 24/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.1630 - accuracy: 0.9457 - val_loss: 0.3162 - val_accuracy: 0.8946
Epoch 25/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.1610 - accuracy: 0.9464 - val_loss: 0.3169 - val_accuracy: 0.8942
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 7s 129us/sample - loss: 1.6597 - accuracy: 0.8128 - val_loss: 0.7630 - val_accuracy: 0.8080
Epoch 2/2
55000/55000 [==============================] - 7s 124us/sample - loss: 0.7176 - accuracy: 0.8271 - val_loss: 0.6848 - val_accuracy: 0.8360
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 145us/sample - loss: 0.5741 - accuracy: 0.8030 - val_loss: 0.3841 - val_accuracy: 0.8572
Epoch 2/2
55000/55000 [==============================] - 7s 134us/sample - loss: 0.4218 - accuracy: 0.8469 - val_loss: 0.3534 - val_accuracy: 0.8728
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
_____no_output_____
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
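# Hedged sketch (not in the original notebook): the same layer-swapping trick used just
# below for AlphaDropout also works for a model built with regular Dropout layers, via the
# MCDropout subclass defined above. `dropout_model` is a hypothetical stand-in for such a model.
def to_mc_dropout_model(dropout_model):
    # rebuild the model, replacing each Dropout layer with an always-on MCDropout layer
    return keras.models.Sequential([
        MCDropout(layer.rate) if isinstance(layer, keras.layers.Dropout) else layer
        for layer in dropout_model.layers
    ])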
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 147us/sample - loss: 0.4745 - accuracy: 0.8329 - val_loss: 0.3988 - val_accuracy: 0.8584
Epoch 2/2
55000/55000 [==============================] - 7s 135us/sample - loss: 0.3554 - accuracy: 0.8688 - val_loss: 0.3681 - val_accuracy: 0.8726
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0-preview.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# TensorFlow ≥2.0-preview is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
WARNING: Logging before flag parsing goes to stderr.
W0610 10:46:09.866298 140735810999168 deprecation.py:323] From /Users/ageron/miniconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/python/ops/math_grad.py:1251: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 61us/sample - loss: 1.3460 - accuracy: 0.6233 - val_loss: 0.9251 - val_accuracy: 0.7208
Epoch 2/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.8208 - accuracy: 0.7359 - val_loss: 0.7318 - val_accuracy: 0.7626
Epoch 3/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.6974 - accuracy: 0.7695 - val_loss: 0.6500 - val_accuracy: 0.7886
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.6338 - accuracy: 0.7904 - val_loss: 0.6000 - val_accuracy: 0.8070
Epoch 5/10
55000/55000 [==============================] - 3s 57us/sample - loss: 0.5920 - accuracy: 0.8045 - val_loss: 0.5662 - val_accuracy: 0.8172
Epoch 6/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5620 - accuracy: 0.8138 - val_loss: 0.5416 - val_accuracy: 0.8230
Epoch 7/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5393 - accuracy: 0.8203 - val_loss: 0.5218 - val_accuracy: 0.8302
Epoch 8/10
55000/55000 [==============================] - 3s 57us/sample - loss: 0.5216 - accuracy: 0.8248 - val_loss: 0.5051 - val_accuracy: 0.8340
Epoch 9/10
55000/55000 [==============================] - 3s 59us/sample - loss: 0.5069 - accuracy: 0.8289 - val_loss: 0.4923 - val_accuracy: 0.8384
Epoch 10/10
55000/55000 [==============================] - 3s 62us/sample - loss: 0.4948 - accuracy: 0.8322 - val_loss: 0.4847 - val_accuracy: 0.8372
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
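# these expressions evaluate to alpha ≈ 1.6733 and scale ≈ 1.0507, the usual published SELU constants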
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 35s 644us/sample - loss: 1.0197 - accuracy: 0.6154 - val_loss: 0.7386 - val_accuracy: 0.7348
Epoch 2/5
55000/55000 [==============================] - 33s 607us/sample - loss: 0.7149 - accuracy: 0.7401 - val_loss: 0.6187 - val_accuracy: 0.7774
Epoch 3/5
55000/55000 [==============================] - 32s 583us/sample - loss: 0.6193 - accuracy: 0.7803 - val_loss: 0.5926 - val_accuracy: 0.8036
Epoch 4/5
55000/55000 [==============================] - 32s 586us/sample - loss: 0.5555 - accuracy: 0.8043 - val_loss: 0.5208 - val_accuracy: 0.8262
Epoch 5/5
55000/55000 [==============================] - 32s 573us/sample - loss: 0.5159 - accuracy: 0.8238 - val_loss: 0.4790 - val_accuracy: 0.8358
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 18s 319us/sample - loss: 1.9174 - accuracy: 0.2242 - val_loss: 1.3856 - val_accuracy: 0.3846
Epoch 2/5
55000/55000 [==============================] - 15s 279us/sample - loss: 1.2147 - accuracy: 0.4750 - val_loss: 1.0691 - val_accuracy: 0.5510
Epoch 3/5
55000/55000 [==============================] - 15s 281us/sample - loss: 0.9576 - accuracy: 0.6025 - val_loss: 0.7688 - val_accuracy: 0.7036
Epoch 4/5
55000/55000 [==============================] - 15s 281us/sample - loss: 0.8116 - accuracy: 0.6762 - val_loss: 0.7276 - val_accuracy: 0.7288
Epoch 5/5
55000/55000 [==============================] - 15s 278us/sample - loss: 0.8167 - accuracy: 0.6862 - val_loss: 0.7697 - val_accuracy: 0.7032
###Markdown
Not great at all: we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
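# gamma and beta are trainable (learned by backprop); moving_mean and moving_variance are not,
# they are updated from batch statistics during training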
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 5s 85us/sample - loss: 0.8756 - accuracy: 0.7140 - val_loss: 0.5514 - val_accuracy: 0.8212
Epoch 2/10
55000/55000 [==============================] - 4s 74us/sample - loss: 0.5765 - accuracy: 0.8033 - val_loss: 0.4742 - val_accuracy: 0.8436
Epoch 3/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.5146 - accuracy: 0.8216 - val_loss: 0.4382 - val_accuracy: 0.8530
Epoch 4/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4821 - accuracy: 0.8322 - val_loss: 0.4170 - val_accuracy: 0.8604
Epoch 5/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4589 - accuracy: 0.8402 - val_loss: 0.4003 - val_accuracy: 0.8658
Epoch 6/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4428 - accuracy: 0.8459 - val_loss: 0.3883 - val_accuracy: 0.8698
Epoch 7/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4220 - accuracy: 0.8521 - val_loss: 0.3792 - val_accuracy: 0.8720
Epoch 8/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4150 - accuracy: 0.8546 - val_loss: 0.3696 - val_accuracy: 0.8754
Epoch 9/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4013 - accuracy: 0.8589 - val_loss: 0.3629 - val_accuracy: 0.8746
Epoch 10/10
55000/55000 [==============================] - 4s 74us/sample - loss: 0.3931 - accuracy: 0.8615 - val_loss: 0.3581 - val_accuracy: 0.8766
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has some as well; it would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
    keras.layers.BatchNormalization(),
    keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 5s 89us/sample - loss: 0.8617 - accuracy: 0.7095 - val_loss: 0.5649 - val_accuracy: 0.8102
Epoch 2/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.5803 - accuracy: 0.8015 - val_loss: 0.4833 - val_accuracy: 0.8344
Epoch 3/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.5153 - accuracy: 0.8208 - val_loss: 0.4463 - val_accuracy: 0.8462
Epoch 4/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.4846 - accuracy: 0.8307 - val_loss: 0.4256 - val_accuracy: 0.8530
Epoch 5/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.4576 - accuracy: 0.8402 - val_loss: 0.4106 - val_accuracy: 0.8590
Epoch 6/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4401 - accuracy: 0.8467 - val_loss: 0.3973 - val_accuracy: 0.8610
Epoch 7/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4296 - accuracy: 0.8482 - val_loss: 0.3899 - val_accuracy: 0.8650
Epoch 8/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.4127 - accuracy: 0.8559 - val_loss: 0.3818 - val_accuracy: 0.8658
Epoch 9/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4007 - accuracy: 0.8588 - val_loss: 0.3741 - val_accuracy: 0.8682
Epoch 10/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.3929 - accuracy: 0.8621 - val_loss: 0.3694 - val_accuracy: 0.8734
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
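# Minimal sketch (not in the original cell): such an optimizer is used like any other, by
# passing it to compile(). clipvalue=1.0 clips each gradient component to [-1.0, 1.0], while
# clipnorm=1.0 rescales the whole gradient vector whenever its ℓ2 norm exceeds 1.0.
# `clip_model` is just an illustrative name.
clip_model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"),
    keras.layers.Dense(10, activation="softmax")
])
clip_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
                   metrics=["accuracy"])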
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
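# model_B_on_A shares its layers with model_A, so training it would also modify model_A;
# the clone above keeps an untouched copy. Freeze the reused layers for the first few epochs: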
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Train on 200 samples, validate on 986 samples
Epoch 1/4
200/200 [==============================] - 0s 2ms/sample - loss: 0.5851 - accuracy: 0.6600 - val_loss: 0.5855 - val_accuracy: 0.6318
Epoch 2/4
200/200 [==============================] - 0s 303us/sample - loss: 0.5484 - accuracy: 0.6850 - val_loss: 0.5484 - val_accuracy: 0.6775
Epoch 3/4
200/200 [==============================] - 0s 294us/sample - loss: 0.5116 - accuracy: 0.7250 - val_loss: 0.5141 - val_accuracy: 0.7160
Epoch 4/4
200/200 [==============================] - 0s 316us/sample - loss: 0.4779 - accuracy: 0.7450 - val_loss: 0.4859 - val_accuracy: 0.7363
Train on 200 samples, validate on 986 samples
Epoch 1/16
200/200 [==============================] - 0s 2ms/sample - loss: 0.3989 - accuracy: 0.8050 - val_loss: 0.3419 - val_accuracy: 0.8702
Epoch 2/16
200/200 [==============================] - 0s 328us/sample - loss: 0.2795 - accuracy: 0.9300 - val_loss: 0.2624 - val_accuracy: 0.9280
Epoch 3/16
200/200 [==============================] - 0s 319us/sample - loss: 0.2128 - accuracy: 0.9650 - val_loss: 0.2150 - val_accuracy: 0.9544
Epoch 4/16
200/200 [==============================] - 0s 318us/sample - loss: 0.1720 - accuracy: 0.9800 - val_loss: 0.1826 - val_accuracy: 0.9635
Epoch 5/16
200/200 [==============================] - 0s 317us/sample - loss: 0.1436 - accuracy: 0.9800 - val_loss: 0.1586 - val_accuracy: 0.9736
Epoch 6/16
200/200 [==============================] - 0s 317us/sample - loss: 0.1231 - accuracy: 0.9850 - val_loss: 0.1407 - val_accuracy: 0.9807
Epoch 7/16
200/200 [==============================] - 0s 325us/sample - loss: 0.1074 - accuracy: 0.9900 - val_loss: 0.1270 - val_accuracy: 0.9828
Epoch 8/16
200/200 [==============================] - 0s 326us/sample - loss: 0.0953 - accuracy: 0.9950 - val_loss: 0.1158 - val_accuracy: 0.9848
Epoch 9/16
200/200 [==============================] - 0s 319us/sample - loss: 0.0854 - accuracy: 1.0000 - val_loss: 0.1076 - val_accuracy: 0.9878
Epoch 10/16
200/200 [==============================] - 0s 322us/sample - loss: 0.0781 - accuracy: 1.0000 - val_loss: 0.1007 - val_accuracy: 0.9888
Epoch 11/16
200/200 [==============================] - 0s 316us/sample - loss: 0.0718 - accuracy: 1.0000 - val_loss: 0.0944 - val_accuracy: 0.9888
Epoch 12/16
200/200 [==============================] - 0s 319us/sample - loss: 0.0662 - accuracy: 1.0000 - val_loss: 0.0891 - val_accuracy: 0.9899
Epoch 13/16
200/200 [==============================] - 0s 318us/sample - loss: 0.0613 - accuracy: 1.0000 - val_loss: 0.0846 - val_accuracy: 0.9899
Epoch 14/16
200/200 [==============================] - 0s 332us/sample - loss: 0.0574 - accuracy: 1.0000 - val_loss: 0.0806 - val_accuracy: 0.9899
Epoch 15/16
200/200 [==============================] - 0s 320us/sample - loss: 0.0538 - accuracy: 1.0000 - val_loss: 0.0770 - val_accuracy: 0.9899
Epoch 16/16
200/200 [==============================] - 0s 320us/sample - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.0740 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
2000/2000 [==============================] - 0s 38us/sample - loss: 0.0689 - accuracy: 0.9925
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of almost 4!
###Code
(100 - 97.05) / (100 - 99.25)
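# model_B's error rate (100 - 97.05 = 2.95%) divided by model_B_on_A's error rate
# (100 - 99.25 = 0.75%) is roughly 3.9, hence "almost 4" above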
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
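# Hedged sketch (not in the original cell): this two-argument variant is passed to the
# LearningRateScheduler callback exactly like the one-argument version used earlier.
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)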
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
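        # boundaries is [0, b1, b2, ...]: argmax finds the first boundary strictly greater
        # than epoch, so subtracting 1 picks the value for the current interval (past the
        # last boundary argmax returns 0, and index -1 selects the final value)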
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4887 - accuracy: 0.8282 - val_loss: 0.4245 - val_accuracy: 0.8526
Epoch 2/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.3830 - accuracy: 0.8641 - val_loss: 0.3798 - val_accuracy: 0.8688
Epoch 3/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.3491 - accuracy: 0.8758 - val_loss: 0.3650 - val_accuracy: 0.8730
Epoch 4/25
55000/55000 [==============================] - 4s 78us/sample - loss: 0.3267 - accuracy: 0.8839 - val_loss: 0.3564 - val_accuracy: 0.8746
Epoch 5/25
55000/55000 [==============================] - 4s 72us/sample - loss: 0.3102 - accuracy: 0.8893 - val_loss: 0.3493 - val_accuracy: 0.8770
Epoch 6/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2969 - accuracy: 0.8939 - val_loss: 0.3400 - val_accuracy: 0.8818
Epoch 7/25
55000/55000 [==============================] - 4s 77us/sample - loss: 0.2855 - accuracy: 0.8983 - val_loss: 0.3385 - val_accuracy: 0.8830
Epoch 8/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2764 - accuracy: 0.9025 - val_loss: 0.3372 - val_accuracy: 0.8824
Epoch 9/25
55000/55000 [==============================] - 4s 67us/sample - loss: 0.2684 - accuracy: 0.9039 - val_loss: 0.3337 - val_accuracy: 0.8848
Epoch 10/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2613 - accuracy: 0.9072 - val_loss: 0.3277 - val_accuracy: 0.8862
Epoch 11/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2555 - accuracy: 0.9086 - val_loss: 0.3273 - val_accuracy: 0.8860
Epoch 12/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2500 - accuracy: 0.9111 - val_loss: 0.3244 - val_accuracy: 0.8840
Epoch 13/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2454 - accuracy: 0.9124 - val_loss: 0.3194 - val_accuracy: 0.8904
Epoch 14/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2414 - accuracy: 0.9141 - val_loss: 0.3226 - val_accuracy: 0.8884
Epoch 15/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2378 - accuracy: 0.9160 - val_loss: 0.3233 - val_accuracy: 0.8860
Epoch 16/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2347 - accuracy: 0.9174 - val_loss: 0.3207 - val_accuracy: 0.8904
Epoch 17/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2318 - accuracy: 0.9179 - val_loss: 0.3195 - val_accuracy: 0.8892
Epoch 18/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2293 - accuracy: 0.9193 - val_loss: 0.3184 - val_accuracy: 0.8916
Epoch 19/25
55000/55000 [==============================] - 4s 67us/sample - loss: 0.2272 - accuracy: 0.9201 - val_loss: 0.3196 - val_accuracy: 0.8886
Epoch 20/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2253 - accuracy: 0.9206 - val_loss: 0.3190 - val_accuracy: 0.8918
Epoch 21/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2235 - accuracy: 0.9214 - val_loss: 0.3176 - val_accuracy: 0.8912
Epoch 22/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2220 - accuracy: 0.9220 - val_loss: 0.3181 - val_accuracy: 0.8900
Epoch 23/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2206 - accuracy: 0.9226 - val_loss: 0.3187 - val_accuracy: 0.8894
Epoch 24/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2193 - accuracy: 0.9231 - val_loss: 0.3168 - val_accuracy: 0.8908
Epoch 25/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2181 - accuracy: 0.9234 - val_loss: 0.3171 - val_accuracy: 0.8898
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
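# Minimal sketch (not in the original cell): like ExponentialDecay above, this schedule is
# simply passed to an optimizer in place of a fixed learning rate.
optimizer = keras.optimizers.SGD(learning_rate)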
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
        return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 2s 30us/sample - loss: 0.4926 - accuracy: 0.8268 - val_loss: 0.4229 - val_accuracy: 0.8520
Epoch 2/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.3754 - accuracy: 0.8669 - val_loss: 0.3833 - val_accuracy: 0.8634
Epoch 3/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.3433 - accuracy: 0.8776 - val_loss: 0.3687 - val_accuracy: 0.8666
Epoch 4/25
55000/55000 [==============================] - 2s 28us/sample - loss: 0.3198 - accuracy: 0.8854 - val_loss: 0.3595 - val_accuracy: 0.8738
Epoch 5/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.3011 - accuracy: 0.8920 - val_loss: 0.3421 - val_accuracy: 0.8764
Epoch 6/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.2873 - accuracy: 0.8973 - val_loss: 0.3371 - val_accuracy: 0.8814
Epoch 7/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2738 - accuracy: 0.9026 - val_loss: 0.3312 - val_accuracy: 0.8842
Epoch 8/25
55000/55000 [==============================] - 2s 28us/sample - loss: 0.2633 - accuracy: 0.9071 - val_loss: 0.3338 - val_accuracy: 0.8824
Epoch 9/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.2543 - accuracy: 0.9098 - val_loss: 0.3296 - val_accuracy: 0.8840
Epoch 10/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2465 - accuracy: 0.9125 - val_loss: 0.3233 - val_accuracy: 0.8874
Epoch 11/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.2406 - accuracy: 0.9157 - val_loss: 0.3215 - val_accuracy: 0.8874
Epoch 12/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2366 - accuracy: 0.9173 - val_loss: 0.3237 - val_accuracy: 0.8862
Epoch 13/25
55000/55000 [==============================] - 2s 27us/sample - loss: 0.2370 - accuracy: 0.9160 - val_loss: 0.3282 - val_accuracy: 0.8856
Epoch 14/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2366 - accuracy: 0.9157 - val_loss: 0.3228 - val_accuracy: 0.8874
Epoch 15/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2362 - accuracy: 0.9162 - val_loss: 0.3261 - val_accuracy: 0.8860
Epoch 16/25
55000/55000 [==============================] - 2s 28us/sample - loss: 0.2339 - accuracy: 0.9167 - val_loss: 0.3336 - val_accuracy: 0.8830
Epoch 17/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2319 - accuracy: 0.9166 - val_loss: 0.3316 - val_accuracy: 0.8818
Epoch 18/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.2295 - accuracy: 0.9181 - val_loss: 0.3424 - val_accuracy: 0.8786
Epoch 19/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2266 - accuracy: 0.9186 - val_loss: 0.3356 - val_accuracy: 0.8844
Epoch 20/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2250 - accuracy: 0.9186 - val_loss: 0.3486 - val_accuracy: 0.8758
Epoch 21/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2221 - accuracy: 0.9189 - val_loss: 0.3443 - val_accuracy: 0.8856
Epoch 22/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2184 - accuracy: 0.9201 - val_loss: 0.3889 - val_accuracy: 0.8700
Epoch 23/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2040 - accuracy: 0.9266 - val_loss: 0.3216 - val_accuracy: 0.8910
Epoch 24/25
55000/55000 [==============================] - 2s 28us/sample - loss: 0.1750 - accuracy: 0.9401 - val_loss: 0.3153 - val_accuracy: 0.8932
Epoch 25/25
55000/55000 [==============================] - 2s 28us/sample - loss: 0.1718 - accuracy: 0.9416 - val_loss: 0.3153 - val_accuracy: 0.8940
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 7s 129us/sample - loss: 1.6597 - accuracy: 0.8128 - val_loss: 0.7630 - val_accuracy: 0.8080
Epoch 2/2
55000/55000 [==============================] - 7s 124us/sample - loss: 0.7176 - accuracy: 0.8271 - val_loss: 0.6848 - val_accuracy: 0.8360
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 145us/sample - loss: 0.5741 - accuracy: 0.8030 - val_loss: 0.3841 - val_accuracy: 0.8572
Epoch 2/2
55000/55000 [==============================] - 7s 134us/sample - loss: 0.4218 - accuracy: 0.8469 - val_loss: 0.3534 - val_accuracy: 0.8728
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
_____no_output_____
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 147us/sample - loss: 0.4745 - accuracy: 0.8329 - val_loss: 0.3988 - val_accuracy: 0.8584
Epoch 2/2
55000/55000 [==============================] - 7s 135us/sample - loss: 0.3554 - accuracy: 0.8688 - val_loss: 0.3681 - val_accuracy: 0.8726
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure Matplotlib plots figures inline, and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated, so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 2s 41us/sample - loss: 1.2810 - accuracy: 0.6205 - val_loss: 0.8869 - val_accuracy: 0.7160
Epoch 2/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.7952 - accuracy: 0.7369 - val_loss: 0.7132 - val_accuracy: 0.7626
Epoch 3/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6817 - accuracy: 0.7726 - val_loss: 0.6385 - val_accuracy: 0.7894
Epoch 4/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6219 - accuracy: 0.7942 - val_loss: 0.5931 - val_accuracy: 0.8016
Epoch 5/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5830 - accuracy: 0.8074 - val_loss: 0.5607 - val_accuracy: 0.8170
Epoch 6/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5552 - accuracy: 0.8172 - val_loss: 0.5355 - val_accuracy: 0.8238
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5339 - accuracy: 0.8226 - val_loss: 0.5166 - val_accuracy: 0.8298
Epoch 8/10
55000/55000 [==============================] - 2s 43us/sample - loss: 0.5173 - accuracy: 0.8262 - val_loss: 0.5043 - val_accuracy: 0.8356
Epoch 9/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5039 - accuracy: 0.8306 - val_loss: 0.4889 - val_accuracy: 0.8384
Epoch 10/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.4923 - accuracy: 0.8333 - val_loss: 0.4816 - val_accuracy: 0.8394
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 47us/sample - loss: 1.3452 - accuracy: 0.6203 - val_loss: 0.9241 - val_accuracy: 0.7170
Epoch 2/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.8196 - accuracy: 0.7364 - val_loss: 0.7314 - val_accuracy: 0.7600
Epoch 3/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.6970 - accuracy: 0.7701 - val_loss: 0.6517 - val_accuracy: 0.7880
Epoch 4/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.6333 - accuracy: 0.7914 - val_loss: 0.6032 - val_accuracy: 0.8050
Epoch 5/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5916 - accuracy: 0.8049 - val_loss: 0.5689 - val_accuracy: 0.8162
Epoch 6/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5619 - accuracy: 0.8143 - val_loss: 0.5416 - val_accuracy: 0.8222
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5391 - accuracy: 0.8208 - val_loss: 0.5213 - val_accuracy: 0.8300
Epoch 8/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5214 - accuracy: 0.8258 - val_loss: 0.5075 - val_accuracy: 0.8348
Epoch 9/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5070 - accuracy: 0.8287 - val_loss: 0.4917 - val_accuracy: 0.8380
Epoch 10/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.4946 - accuracy: 0.8322 - val_loss: 0.4839 - val_accuracy: 0.8378
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 13s 238us/sample - loss: 1.1277 - accuracy: 0.5573 - val_loss: 0.8152 - val_accuracy: 0.6700
Epoch 2/5
55000/55000 [==============================] - 11s 198us/sample - loss: 0.6935 - accuracy: 0.7383 - val_loss: 0.5806 - val_accuracy: 0.7928
Epoch 3/5
55000/55000 [==============================] - 11s 196us/sample - loss: 0.5871 - accuracy: 0.7865 - val_loss: 0.6876 - val_accuracy: 0.7462
Epoch 4/5
55000/55000 [==============================] - 11s 199us/sample - loss: 0.5281 - accuracy: 0.8134 - val_loss: 0.5236 - val_accuracy: 0.8230
Epoch 5/5
55000/55000 [==============================] - 11s 201us/sample - loss: 0.4824 - accuracy: 0.8327 - val_loss: 0.5201 - val_accuracy: 0.8312
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 12s 213us/sample - loss: 1.7518 - accuracy: 0.2797 - val_loss: 1.2328 - val_accuracy: 0.4720
Epoch 2/5
55000/55000 [==============================] - 10s 177us/sample - loss: 1.1922 - accuracy: 0.4982 - val_loss: 1.0247 - val_accuracy: 0.5354
Epoch 3/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.9390 - accuracy: 0.6180 - val_loss: 1.0809 - val_accuracy: 0.5118
Epoch 4/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.7787 - accuracy: 0.6937 - val_loss: 0.7067 - val_accuracy: 0.7344
Epoch 5/5
55000/55000 [==============================] - 10s 180us/sample - loss: 0.7465 - accuracy: 0.7122 - val_loss: 0.9720 - val_accuracy: 0.5702
###Markdown
Not great at all; we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 63us/sample - loss: 0.8760 - accuracy: 0.7122 - val_loss: 0.5509 - val_accuracy: 0.8224
Epoch 2/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5737 - accuracy: 0.8039 - val_loss: 0.4723 - val_accuracy: 0.8460
Epoch 3/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5143 - accuracy: 0.8231 - val_loss: 0.4376 - val_accuracy: 0.8570
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4826 - accuracy: 0.8333 - val_loss: 0.4135 - val_accuracy: 0.8638
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4571 - accuracy: 0.8415 - val_loss: 0.3990 - val_accuracy: 0.8654
Epoch 6/10
55000/55000 [==============================] - 3s 53us/sample - loss: 0.4432 - accuracy: 0.8456 - val_loss: 0.3870 - val_accuracy: 0.8710
Epoch 7/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.4255 - accuracy: 0.8515 - val_loss: 0.3782 - val_accuracy: 0.8698
Epoch 8/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4150 - accuracy: 0.8536 - val_loss: 0.3708 - val_accuracy: 0.8758
Epoch 9/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4016 - accuracy: 0.8596 - val_loss: 0.3634 - val_accuracy: 0.8750
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3915 - accuracy: 0.8629 - val_loss: 0.3601 - val_accuracy: 0.8758
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has its own offset parameter; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 4s 64us/sample - loss: 0.8656 - accuracy: 0.7094 - val_loss: 0.5650 - val_accuracy: 0.8098
Epoch 2/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5766 - accuracy: 0.8018 - val_loss: 0.4834 - val_accuracy: 0.8358
Epoch 3/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5184 - accuracy: 0.8216 - val_loss: 0.4461 - val_accuracy: 0.8470
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4852 - accuracy: 0.8314 - val_loss: 0.4226 - val_accuracy: 0.8558
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4579 - accuracy: 0.8399 - val_loss: 0.4086 - val_accuracy: 0.8604
Epoch 6/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4406 - accuracy: 0.8457 - val_loss: 0.3974 - val_accuracy: 0.8640
Epoch 7/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4263 - accuracy: 0.8498 - val_loss: 0.3883 - val_accuracy: 0.8676
Epoch 8/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4152 - accuracy: 0.8530 - val_loss: 0.3803 - val_accuracy: 0.8682
Epoch 9/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4032 - accuracy: 0.8564 - val_loss: 0.3738 - val_accuracy: 0.8718
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3937 - accuracy: 0.8623 - val_loss: 0.3690 - val_accuracy: 0.8732
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
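###Markdown
Either optimizer is then used like any other. Here is a minimal sketch (not a cell from the original run) of attaching a clipped optimizer to a model with `compile()`:
```
optimizer = keras.optimizers.SGD(clipvalue=1.0)
model.compile(loss="sparse_categorical_crossentropy",
              optimizer=optimizer, metrics=["accuracy"])
```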
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Train on 200 samples, validate on 986 samples
Epoch 1/4
200/200 [==============================] - 0s 2ms/sample - loss: 0.5619 - accuracy: 0.6650 - val_loss: 0.5669 - val_accuracy: 0.6531
Epoch 2/4
200/200 [==============================] - 0s 208us/sample - loss: 0.5249 - accuracy: 0.7200 - val_loss: 0.5337 - val_accuracy: 0.6957
Epoch 3/4
200/200 [==============================] - 0s 200us/sample - loss: 0.4923 - accuracy: 0.7400 - val_loss: 0.5039 - val_accuracy: 0.7211
Epoch 4/4
200/200 [==============================] - 0s 214us/sample - loss: 0.4630 - accuracy: 0.7550 - val_loss: 0.4773 - val_accuracy: 0.7383
Train on 200 samples, validate on 986 samples
Epoch 1/16
200/200 [==============================] - 0s 2ms/sample - loss: 0.3864 - accuracy: 0.8200 - val_loss: 0.3357 - val_accuracy: 0.8661
Epoch 2/16
200/200 [==============================] - 0s 207us/sample - loss: 0.2701 - accuracy: 0.9350 - val_loss: 0.2608 - val_accuracy: 0.9249
Epoch 3/16
200/200 [==============================] - 0s 226us/sample - loss: 0.2082 - accuracy: 0.9650 - val_loss: 0.2150 - val_accuracy: 0.9503
Epoch 4/16
200/200 [==============================] - 0s 212us/sample - loss: 0.1695 - accuracy: 0.9800 - val_loss: 0.1840 - val_accuracy: 0.9625
Epoch 5/16
200/200 [==============================] - 0s 226us/sample - loss: 0.1428 - accuracy: 0.9800 - val_loss: 0.1602 - val_accuracy: 0.9706
Epoch 6/16
200/200 [==============================] - 0s 236us/sample - loss: 0.1221 - accuracy: 0.9850 - val_loss: 0.1424 - val_accuracy: 0.9797
Epoch 7/16
200/200 [==============================] - 0s 218us/sample - loss: 0.1067 - accuracy: 0.9950 - val_loss: 0.1293 - val_accuracy: 0.9828
Epoch 8/16
200/200 [==============================] - 0s 229us/sample - loss: 0.0952 - accuracy: 0.9950 - val_loss: 0.1186 - val_accuracy: 0.9848
Epoch 9/16
200/200 [==============================] - 0s 224us/sample - loss: 0.0858 - accuracy: 0.9950 - val_loss: 0.1099 - val_accuracy: 0.9848
Epoch 10/16
200/200 [==============================] - 0s 241us/sample - loss: 0.0781 - accuracy: 1.0000 - val_loss: 0.1026 - val_accuracy: 0.9878
Epoch 11/16
200/200 [==============================] - 0s 234us/sample - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0964 - val_accuracy: 0.9888
Epoch 12/16
200/200 [==============================] - 0s 222us/sample - loss: 0.0664 - accuracy: 1.0000 - val_loss: 0.0906 - val_accuracy: 0.9888
Epoch 13/16
200/200 [==============================] - 0s 228us/sample - loss: 0.0614 - accuracy: 1.0000 - val_loss: 0.0862 - val_accuracy: 0.9899
Epoch 14/16
200/200 [==============================] - 0s 225us/sample - loss: 0.0575 - accuracy: 1.0000 - val_loss: 0.0818 - val_accuracy: 0.9899
Epoch 15/16
200/200 [==============================] - 0s 219us/sample - loss: 0.0537 - accuracy: 1.0000 - val_loss: 0.0782 - val_accuracy: 0.9899
Epoch 16/16
200/200 [==============================] - 0s 221us/sample - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.0752 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
2000/2000 [==============================] - 0s 25us/sample - loss: 0.0697 - accuracy: 0.9925
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4!
###Code
(100 - 96.95) / (100 - 99.25)
###Output
_____no_output_____
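###Markdown
To spell out the arithmetic in the cell above: assuming model_B scored about 96.95% accuracy on the test set (the figure used in that cell), its error rate is $100 - 96.95 = 3.05\%$, while model_B_on_A's error rate is $100 - 99.25 = 0.75\%$, so the ratio is $3.05 / 0.75 \approx 4.07$, hence "a factor of 4".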
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
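###Markdown
As a quick sanity check on the schedule above (assuming 55,000 training samples and a batch size of 32, i.e. about 1,718 steps per epoch): after 25 epochs the learning rate has decayed to roughly $0.01 / (1 + 10^{-4} \times 25 \times 1718) \approx 0.0019$, a bit under a fifth of its initial value.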
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
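###Markdown
With $lr_0 = 0.01$ and $s = 20$, this schedule divides the learning rate by 10 every 20 epochs: it starts at 0.01, reaches 0.001 at epoch 20, and shrinks smoothly in between (for example, about $0.01 \times 0.1^{5/20} \approx 0.0056$ at epoch 5).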
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
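###Markdown
Note that this variant simply multiplies the optimizer's current learning rate by $0.1^{1/20}$ at the start of each epoch, so the starting point comes from the optimizer's own learning rate rather than from a constant baked into the schedule. It is passed to the callback exactly like the one-argument version; a minimal sketch reusing the names from the cells above:
```
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
                    validation_data=(X_valid_scaled, y_valid),
                    callbacks=[lr_scheduler])
```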
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4872 - accuracy: 0.8296 - val_loss: 0.4141 - val_accuracy: 0.8548
Epoch 2/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3829 - accuracy: 0.8643 - val_loss: 0.3773 - val_accuracy: 0.8704
Epoch 3/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3495 - accuracy: 0.8763 - val_loss: 0.3696 - val_accuracy: 0.8730
Epoch 4/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3274 - accuracy: 0.8831 - val_loss: 0.3545 - val_accuracy: 0.8760
Epoch 5/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3102 - accuracy: 0.8899 - val_loss: 0.3460 - val_accuracy: 0.8784
Epoch 6/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2971 - accuracy: 0.8945 - val_loss: 0.3415 - val_accuracy: 0.8796
Epoch 7/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2858 - accuracy: 0.8985 - val_loss: 0.3353 - val_accuracy: 0.8834
Epoch 8/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2767 - accuracy: 0.9018 - val_loss: 0.3321 - val_accuracy: 0.8854
Epoch 9/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2685 - accuracy: 0.9043 - val_loss: 0.3281 - val_accuracy: 0.8862
Epoch 10/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2612 - accuracy: 0.9075 - val_loss: 0.3304 - val_accuracy: 0.8832
Epoch 11/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2554 - accuracy: 0.9097 - val_loss: 0.3261 - val_accuracy: 0.8868
Epoch 12/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2502 - accuracy: 0.9115 - val_loss: 0.3246 - val_accuracy: 0.8876
Epoch 13/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2456 - accuracy: 0.9133 - val_loss: 0.3243 - val_accuracy: 0.8870
Epoch 14/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2416 - accuracy: 0.9141 - val_loss: 0.3238 - val_accuracy: 0.8862
Epoch 15/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2380 - accuracy: 0.9170 - val_loss: 0.3197 - val_accuracy: 0.8876
Epoch 16/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2346 - accuracy: 0.9169 - val_loss: 0.3207 - val_accuracy: 0.8866
Epoch 17/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2321 - accuracy: 0.9186 - val_loss: 0.3182 - val_accuracy: 0.8878
Epoch 18/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2291 - accuracy: 0.9191 - val_loss: 0.3206 - val_accuracy: 0.8884
Epoch 19/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2271 - accuracy: 0.9201 - val_loss: 0.3194 - val_accuracy: 0.8876
Epoch 20/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2252 - accuracy: 0.9215 - val_loss: 0.3178 - val_accuracy: 0.8880
Epoch 21/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2234 - accuracy: 0.9218 - val_loss: 0.3171 - val_accuracy: 0.8904
Epoch 22/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2218 - accuracy: 0.9230 - val_loss: 0.3171 - val_accuracy: 0.8884
Epoch 23/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2204 - accuracy: 0.9227 - val_loss: 0.3168 - val_accuracy: 0.8882
Epoch 24/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2191 - accuracy: 0.9240 - val_loss: 0.3173 - val_accuracy: 0.8900
Epoch 25/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2182 - accuracy: 0.9239 - val_loss: 0.3166 - val_accuracy: 0.8892
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
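###Markdown
A minimal sketch of how this schedule would then be used (the same pattern as the `ExponentialDecay` schedule above, reusing the `learning_rate` object just created): pass it to an optimizer in place of a fixed learning rate, then compile as usual.
```
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy",
              optimizer=optimizer, metrics=["accuracy"])
```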
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.6576 - accuracy: 0.7743 - val_loss: 0.4901 - val_accuracy: 0.8300
Epoch 2/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.4587 - accuracy: 0.8387 - val_loss: 0.4316 - val_accuracy: 0.8490
Epoch 3/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.4119 - accuracy: 0.8560 - val_loss: 0.4117 - val_accuracy: 0.8580
Epoch 4/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.3842 - accuracy: 0.8657 - val_loss: 0.3920 - val_accuracy: 0.8638
Epoch 5/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3636 - accuracy: 0.8708 - val_loss: 0.3739 - val_accuracy: 0.8710
Epoch 6/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3460 - accuracy: 0.8767 - val_loss: 0.3742 - val_accuracy: 0.8690
Epoch 7/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.3312 - accuracy: 0.8818 - val_loss: 0.3760 - val_accuracy: 0.8656
Epoch 8/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.3194 - accuracy: 0.8846 - val_loss: 0.3583 - val_accuracy: 0.8756
Epoch 9/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3056 - accuracy: 0.8902 - val_loss: 0.3474 - val_accuracy: 0.8820
Epoch 10/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2943 - accuracy: 0.8937 - val_loss: 0.3993 - val_accuracy: 0.8562
Epoch 11/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2845 - accuracy: 0.8957 - val_loss: 0.3446 - val_accuracy: 0.8820
Epoch 12/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2720 - accuracy: 0.9020 - val_loss: 0.3348 - val_accuracy: 0.8808
Epoch 13/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2536 - accuracy: 0.9094 - val_loss: 0.3386 - val_accuracy: 0.8822
Epoch 14/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2420 - accuracy: 0.9125 - val_loss: 0.3313 - val_accuracy: 0.8858
Epoch 15/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.2288 - accuracy: 0.9174 - val_loss: 0.3241 - val_accuracy: 0.8840
Epoch 16/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2169 - accuracy: 0.9222 - val_loss: 0.3342 - val_accuracy: 0.8846
Epoch 17/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2067 - accuracy: 0.9264 - val_loss: 0.3208 - val_accuracy: 0.8874
Epoch 18/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1977 - accuracy: 0.9301 - val_loss: 0.3186 - val_accuracy: 0.8888
Epoch 19/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1892 - accuracy: 0.9329 - val_loss: 0.3278 - val_accuracy: 0.8848
Epoch 20/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1818 - accuracy: 0.9375 - val_loss: 0.3195 - val_accuracy: 0.8894
Epoch 21/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1756 - accuracy: 0.9395 - val_loss: 0.3163 - val_accuracy: 0.8948
Epoch 22/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.1701 - accuracy: 0.9416 - val_loss: 0.3177 - val_accuracy: 0.8920
Epoch 23/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.1657 - accuracy: 0.9441 - val_loss: 0.3168 - val_accuracy: 0.8944
Epoch 24/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1629 - accuracy: 0.9454 - val_loss: 0.3167 - val_accuracy: 0.8946
Epoch 25/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.1611 - accuracy: 0.9465 - val_loss: 0.3170 - val_accuracy: 0.8934
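###Markdown
To make the shape of the schedule implemented by `OneCycleScheduler` explicit, here is a small sketch (not part of the original run) that reproduces its three phases from the defaults used above: a linear ramp from `max_rate / 10` up to `max_rate` over the first half of the iterations, a symmetric ramp back down, and a final linear drop to `start_rate / 1000` over roughly the last 10% of iterations.
```
import numpy as np
import matplotlib.pyplot as plt

iterations = (55000 // batch_size) * n_epochs   # same number of batches as the run above
max_rate = 0.05
start_rate = max_rate / 10                      # OneCycleScheduler default
last_iterations = iterations // 10 + 1          # OneCycleScheduler default
last_rate = start_rate / 1000                   # OneCycleScheduler default
half = (iterations - last_iterations) // 2

rates = np.concatenate([
    np.linspace(start_rate, max_rate, half),                     # phase 1: ramp up
    np.linspace(max_rate, start_rate, half),                     # phase 2: ramp back down
    np.linspace(start_rate, last_rate, iterations - 2 * half),   # phase 3: final drop
])
plt.plot(rates)
plt.xlabel("Iteration")
plt.ylabel("Learning rate")
plt.show()
```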
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 7s 133us/sample - loss: 1.6006 - accuracy: 0.8129 - val_loss: 0.7374 - val_accuracy: 0.8236
Epoch 2/2
55000/55000 [==============================] - 7s 128us/sample - loss: 0.7179 - accuracy: 0.8265 - val_loss: 0.6905 - val_accuracy: 0.8356
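###Markdown
The comments in the cell above also mention the `l1()` and `l1_l2()` regularizers; as a quick illustration (these layers are not used in the training runs here), they plug into a layer in exactly the same way:
```
layer_l1 = keras.layers.Dense(100, activation="elu",
                              kernel_initializer="he_normal",
                              kernel_regularizer=keras.regularizers.l1(0.1))
layer_l1_l2 = keras.layers.Dense(100, activation="elu",
                                 kernel_initializer="he_normal",
                                 kernel_regularizer=keras.regularizers.l1_l2(0.1, 0.01))
```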
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 139us/sample - loss: 0.5856 - accuracy: 0.7992 - val_loss: 0.3908 - val_accuracy: 0.8570
Epoch 2/2
55000/55000 [==============================] - 6s 117us/sample - loss: 0.4260 - accuracy: 0.8443 - val_loss: 0.3389 - val_accuracy: 0.8730
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
Train on 55000 samples
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4186 - accuracy: 0.8451
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
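###Markdown
For class predictions rather than probabilities, the same averaging trick applies. Here is a small sketch (the `mc_predict_classes` helper is hypothetical, not part of the notebook) that reuses `mc_model` and `X_test_scaled` from the cells above:
```
import numpy as np

def mc_predict_classes(mc_model, X, n_samples=100):
    # average the softmax outputs over many stochastic forward passes, then take the argmax
    probas = np.stack([mc_model.predict(X) for _ in range(n_samples)])
    return probas.mean(axis=0).argmax(axis=1)

y_pred_mc = mc_predict_classes(mc_model, X_test_scaled[:100])
```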
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 6s 114us/sample - loss: 0.4734 - accuracy: 0.8364 - val_loss: 0.3999 - val_accuracy: 0.8614
Epoch 2/2
55000/55000 [==============================] - 6s 100us/sample - loss: 0.3583 - accuracy: 0.8685 - val_loss: 0.3494 - val_accuracy: 0.8746
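###Markdown
As a quick check (not in the original notebook), the constraint can be verified after training: `max_norm(1.)` rescales each unit's incoming weight vector whenever its ℓ2 norm exceeds 1, so every column norm of the first hidden layer's kernel should be at most 1.
```
import numpy as np

kernel = model.layers[1].get_weights()[0]        # kernel of the first MaxNormDense layer
print(np.linalg.norm(kernel, axis=0).max())      # should be <= 1.0 (up to float rounding)
```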
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
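###Markdown
The learning-rate comparison described a few cells above can be reproduced with a small loop. This is only a rough sketch: `build_cifar10_model()` is a hypothetical helper that rebuilds the same 20-layer architecture, the candidate rates and the 10-epoch budget simply mirror the description, and no `keras.backend.clear_session()` is called so the `model` defined above is left untouched:
###Code
# Hypothetical helper: rebuild the same 20-hidden-layer architecture from scratch.
def build_cifar10_model():
    m = keras.models.Sequential()
    m.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
    for _ in range(20):
        m.add(keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"))
    m.add(keras.layers.Dense(10, activation="softmax"))
    return m

# Train a fresh copy for a few epochs per candidate rate and keep the best
# validation accuracy reached by each one.
results = {}
for rate in (1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3, 1e-2):
    candidate = build_cifar10_model()
    candidate.compile(loss="sparse_categorical_crossentropy",
                      optimizer=keras.optimizers.Nadam(lr=rate),
                      metrics=["accuracy"])
    sweep_history = candidate.fit(X_train, y_train, epochs=10,
                                  validation_data=(X_valid, y_valid), verbose=0)
    results[rate] = max(sweep_history.history["val_accuracy"])
print(results)
###Output
_____no_output_____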
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
5000/5000 [==============================] - 0s 65us/sample - loss: 1.5099 - accuracy: 0.4736
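###Markdown
The number of epochs needed to reach the lowest validation loss (discussed below) can be read off the training history. A small sketch, assuming the `fit()` call above had been assigned to a `history` variable:
###Code
# Keras reports epochs starting at 1, hence the +1 on the 0-based argmin.
best_epoch = np.argmin(history.history["val_loss"]) + 1
print(best_epoch, min(history.history["val_loss"]))
###Output
_____no_output_____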
###Markdown
The model with the lowest validation loss gets about 47% accuracy on the validation set. It took 39 epochs to reach the lowest validation loss, with roughly 10 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization.

c. *Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?*

The code below is very similar to the code above, with a few changes:

* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.
* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.
* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 21s 466us/sample - loss: 1.8365 - accuracy: 0.3390 - val_loss: 1.6330 - val_accuracy: 0.4174
Epoch 2/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.6623 - accuracy: 0.4063 - val_loss: 1.5967 - val_accuracy: 0.4204
Epoch 3/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.5946 - accuracy: 0.4314 - val_loss: 1.5225 - val_accuracy: 0.4602
Epoch 4/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5417 - accuracy: 0.4551 - val_loss: 1.4680 - val_accuracy: 0.4756
Epoch 5/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5013 - accuracy: 0.4678 - val_loss: 1.4378 - val_accuracy: 0.4862
Epoch 6/100
45000/45000 [==============================] - 16s 361us/sample - loss: 1.4637 - accuracy: 0.4797 - val_loss: 1.4221 - val_accuracy: 0.4982
Epoch 7/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.4361 - accuracy: 0.4921 - val_loss: 1.4133 - val_accuracy: 0.4968
Epoch 8/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.4078 - accuracy: 0.4998 - val_loss: 1.3916 - val_accuracy: 0.5040
Epoch 9/100
45000/45000 [==============================] - 14s 315us/sample - loss: 1.3811 - accuracy: 0.5104 - val_loss: 1.3695 - val_accuracy: 0.5116
Epoch 10/100
45000/45000 [==============================] - 14s 318us/sample - loss: 1.3571 - accuracy: 0.5205 - val_loss: 1.3701 - val_accuracy: 0.5112
Epoch 11/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.3367 - accuracy: 0.5246 - val_loss: 1.3549 - val_accuracy: 0.5196
Epoch 12/100
45000/45000 [==============================] - 14s 316us/sample - loss: 1.3158 - accuracy: 0.5322 - val_loss: 1.4038 - val_accuracy: 0.5048
Epoch 13/100
45000/45000 [==============================] - 15s 328us/sample - loss: 1.3028 - accuracy: 0.5392 - val_loss: 1.3453 - val_accuracy: 0.5242
Epoch 14/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2798 - accuracy: 0.5460 - val_loss: 1.3427 - val_accuracy: 0.5218
Epoch 15/100
45000/45000 [==============================] - 15s 327us/sample - loss: 1.2642 - accuracy: 0.5502 - val_loss: 1.3802 - val_accuracy: 0.5072
Epoch 16/100
45000/45000 [==============================] - 15s 336us/sample - loss: 1.2497 - accuracy: 0.5592 - val_loss: 1.3870 - val_accuracy: 0.5154
Epoch 17/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.2339 - accuracy: 0.5645 - val_loss: 1.3270 - val_accuracy: 0.5366
Epoch 18/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2223 - accuracy: 0.5688 - val_loss: 1.3054 - val_accuracy: 0.5506
Epoch 19/100
45000/45000 [==============================] - 15s 339us/sample - loss: 1.2015 - accuracy: 0.5750 - val_loss: 1.3134 - val_accuracy: 0.5462
Epoch 20/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.1884 - accuracy: 0.5796 - val_loss: 1.3459 - val_accuracy: 0.5252
Epoch 21/100
45000/45000 [==============================] - 17s 370us/sample - loss: 1.1767 - accuracy: 0.5876 - val_loss: 1.3404 - val_accuracy: 0.5392
Epoch 22/100
45000/45000 [==============================] - 16s 366us/sample - loss: 1.1679 - accuracy: 0.5872 - val_loss: 1.3600 - val_accuracy: 0.5332
Epoch 23/100
45000/45000 [==============================] - 15s 337us/sample - loss: 1.1513 - accuracy: 0.5954 - val_loss: 1.3148 - val_accuracy: 0.5498
Epoch 24/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.1345 - accuracy: 0.6033 - val_loss: 1.3290 - val_accuracy: 0.5368
Epoch 25/100
45000/45000 [==============================] - 16s 350us/sample - loss: 1.1252 - accuracy: 0.6025 - val_loss: 1.3350 - val_accuracy: 0.5434
Epoch 26/100
45000/45000 [==============================] - 15s 341us/sample - loss: 1.1192 - accuracy: 0.6070 - val_loss: 1.3423 - val_accuracy: 0.5364
Epoch 27/100
45000/45000 [==============================] - 15s 342us/sample - loss: 1.1028 - accuracy: 0.6093 - val_loss: 1.3511 - val_accuracy: 0.5358
Epoch 28/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.0907 - accuracy: 0.6158 - val_loss: 1.3706 - val_accuracy: 0.5350
Epoch 29/100
45000/45000 [==============================] - 16s 345us/sample - loss: 1.0785 - accuracy: 0.6197 - val_loss: 1.3356 - val_accuracy: 0.5398
Epoch 30/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.0718 - accuracy: 0.6198 - val_loss: 1.3529 - val_accuracy: 0.5446
Epoch 31/100
45000/45000 [==============================] - 15s 333us/sample - loss: 1.0629 - accuracy: 0.6259 - val_loss: 1.3590 - val_accuracy: 0.5434
Epoch 32/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.0504 - accuracy: 0.6292 - val_loss: 1.3448 - val_accuracy: 0.5388
Epoch 33/100
45000/45000 [==============================] - 15s 325us/sample - loss: 1.0420 - accuracy: 0.6318 - val_loss: 1.3790 - val_accuracy: 0.5350
Epoch 34/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.0304 - accuracy: 0.6362 - val_loss: 1.3621 - val_accuracy: 0.5430
Epoch 35/100
45000/45000 [==============================] - 16s 356us/sample - loss: 1.0280 - accuracy: 0.6362 - val_loss: 1.3673 - val_accuracy: 0.5366
Epoch 36/100
45000/45000 [==============================] - 16s 354us/sample - loss: 1.0100 - accuracy: 0.6439 - val_loss: 1.3659 - val_accuracy: 0.5420
Epoch 37/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.0060 - accuracy: 0.6473 - val_loss: 1.3773 - val_accuracy: 0.5398
Epoch 38/100
45000/45000 [==============================] - 15s 332us/sample - loss: 0.9966 - accuracy: 0.6496 - val_loss: 1.3946 - val_accuracy: 0.5340
5000/5000 [==============================] - 1s 157us/sample - loss: 1.3054 - accuracy: 0.5506
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 39 epochs to reach the lowest validation loss, while the new model with BN took 18 epochs. That's more than twice as fast as the previous model. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.
* *Does BN produce a better model?* Yes! The final model is also much better, with 55% accuracy instead of 47%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).
* *How does BN affect training speed?* Although the model converged twice as fast, each epoch took about 16s instead of 10s, because of the extra computations required by the BN layers. So overall, although the number of epochs was reduced by 50%, the training time (wall time) was shortened by 30%. Which is still pretty significant!

d. *Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
5000/5000 [==============================] - 0s 74us/sample - loss: 1.4626 - accuracy: 0.5140
###Markdown
We get 51.4% accuracy, which is better than the original model, but not quite as good as the model using batch normalization. Moreover, it took 13 epochs to reach the best model, which is much faster than both the original model and the BN model, plus each epoch took only 10 seconds, just like the original model. So it's by far the fastest model to train (both in terms of epochs and wall time).

e. *Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 12s 263us/sample - loss: 1.8763 - accuracy: 0.3330 - val_loss: 1.7595 - val_accuracy: 0.3668
Epoch 2/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.6527 - accuracy: 0.4148 - val_loss: 1.7666 - val_accuracy: 0.3808
Epoch 3/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.5682 - accuracy: 0.4439 - val_loss: 1.6393 - val_accuracy: 0.4490
Epoch 4/100
45000/45000 [==============================] - 10s 211us/sample - loss: 1.5030 - accuracy: 0.4698 - val_loss: 1.6028 - val_accuracy: 0.4466
Epoch 5/100
45000/45000 [==============================] - 9s 209us/sample - loss: 1.4430 - accuracy: 0.4913 - val_loss: 1.5394 - val_accuracy: 0.4562
Epoch 6/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.4005 - accuracy: 0.5084 - val_loss: 1.5408 - val_accuracy: 0.4818
Epoch 7/100
45000/45000 [==============================] - 10s 216us/sample - loss: 1.3541 - accuracy: 0.5298 - val_loss: 1.5236 - val_accuracy: 0.4866
Epoch 8/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.3189 - accuracy: 0.5405 - val_loss: 1.5174 - val_accuracy: 0.4926
Epoch 9/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.2800 - accuracy: 0.5570 - val_loss: 1.5722 - val_accuracy: 0.4998
Epoch 10/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.2512 - accuracy: 0.5656 - val_loss: 1.4974 - val_accuracy: 0.5082
Epoch 11/100
45000/45000 [==============================] - 9s 203us/sample - loss: 1.2141 - accuracy: 0.5802 - val_loss: 1.6123 - val_accuracy: 0.4916
Epoch 12/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.1856 - accuracy: 0.5893 - val_loss: 1.5449 - val_accuracy: 0.5016
Epoch 13/100
45000/45000 [==============================] - 9s 204us/sample - loss: 1.1602 - accuracy: 0.5978 - val_loss: 1.6241 - val_accuracy: 0.5056
Epoch 14/100
45000/45000 [==============================] - 9s 199us/sample - loss: 1.1290 - accuracy: 0.6118 - val_loss: 1.6085 - val_accuracy: 0.4936
Epoch 15/100
45000/45000 [==============================] - 9s 198us/sample - loss: 1.1050 - accuracy: 0.6176 - val_loss: 1.6951 - val_accuracy: 0.4860
Epoch 16/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.0786 - accuracy: 0.6293 - val_loss: 1.5806 - val_accuracy: 0.5044
Epoch 17/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.0629 - accuracy: 0.6362 - val_loss: 1.5932 - val_accuracy: 0.4970
Epoch 18/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.0330 - accuracy: 0.6458 - val_loss: 1.5968 - val_accuracy: 0.5080
Epoch 19/100
45000/45000 [==============================] - 9s 195us/sample - loss: 1.0104 - accuracy: 0.6488 - val_loss: 1.6166 - val_accuracy: 0.5152
Epoch 20/100
45000/45000 [==============================] - 9s 206us/sample - loss: 0.9896 - accuracy: 0.6629 - val_loss: 1.6174 - val_accuracy: 0.5154
Epoch 21/100
45000/45000 [==============================] - 9s 211us/sample - loss: 0.9741 - accuracy: 0.6650 - val_loss: 1.7201 - val_accuracy: 0.5040
Epoch 22/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9475 - accuracy: 0.6769 - val_loss: 1.7498 - val_accuracy: 0.5176
Epoch 23/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.9346 - accuracy: 0.6780 - val_loss: 1.7491 - val_accuracy: 0.5020
Epoch 24/100
45000/45000 [==============================] - 10s 223us/sample - loss: 1.1878 - accuracy: 0.6792 - val_loss: 1.6664 - val_accuracy: 0.4906
Epoch 25/100
45000/45000 [==============================] - 10s 219us/sample - loss: 0.9851 - accuracy: 0.6646 - val_loss: 1.7358 - val_accuracy: 0.5086
Epoch 26/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9053 - accuracy: 0.6911 - val_loss: 1.8361 - val_accuracy: 0.5094
Epoch 27/100
45000/45000 [==============================] - 10s 215us/sample - loss: 0.8681 - accuracy: 0.7048 - val_loss: 1.8487 - val_accuracy: 0.5036
Epoch 28/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.8460 - accuracy: 0.7132 - val_loss: 1.8516 - val_accuracy: 0.5068
Epoch 29/100
45000/45000 [==============================] - 10s 223us/sample - loss: 0.8258 - accuracy: 0.7208 - val_loss: 1.9383 - val_accuracy: 0.5094
Epoch 30/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.8106 - accuracy: 0.7248 - val_loss: 2.0527 - val_accuracy: 0.4974
5000/5000 [==============================] - 0s 71us/sample - loss: 1.4974 - accuracy: 0.5082
###Markdown
The model reaches 50.8% accuracy on the validation set. That's very slightly worse than without dropout (51.4%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get virtually no accuracy improvement in this case (from 50.8% to 50.9%). So the best model we got in this exercise is the Batch Normalization model.

f. *Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(len(X_train_scaled) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/15
45000/45000 [==============================] - 3s 69us/sample - loss: 2.0504 - accuracy: 0.2823 - val_loss: 1.7711 - val_accuracy: 0.3706
Epoch 2/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.7626 - accuracy: 0.3766 - val_loss: 1.7751 - val_accuracy: 0.3844
Epoch 3/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.6264 - accuracy: 0.4272 - val_loss: 1.6774 - val_accuracy: 0.4216
Epoch 4/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.5527 - accuracy: 0.4474 - val_loss: 1.6633 - val_accuracy: 0.4316
Epoch 5/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.4997 - accuracy: 0.4701 - val_loss: 1.5909 - val_accuracy: 0.4540
Epoch 6/15
45000/45000 [==============================] - 3s 60us/sample - loss: 1.4564 - accuracy: 0.4841 - val_loss: 1.5982 - val_accuracy: 0.4624
Epoch 7/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.4232 - accuracy: 0.4958 - val_loss: 1.6417 - val_accuracy: 0.4382
Epoch 8/15
45000/45000 [==============================] - 3s 58us/sample - loss: 1.3530 - accuracy: 0.5199 - val_loss: 1.5050 - val_accuracy: 0.4778
Epoch 9/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.2771 - accuracy: 0.5480 - val_loss: 1.5254 - val_accuracy: 0.4928
Epoch 10/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.2073 - accuracy: 0.5726 - val_loss: 1.5013 - val_accuracy: 0.5052
Epoch 11/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.1380 - accuracy: 0.5948 - val_loss: 1.4941 - val_accuracy: 0.5170
Epoch 12/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.0672 - accuracy: 0.6204 - val_loss: 1.5091 - val_accuracy: 0.5106
Epoch 13/15
45000/45000 [==============================] - 3s 56us/sample - loss: 0.9967 - accuracy: 0.6466 - val_loss: 1.5261 - val_accuracy: 0.5212
Epoch 14/15
45000/45000 [==============================] - 3s 58us/sample - loss: 0.9301 - accuracy: 0.6712 - val_loss: 1.5437 - val_accuracy: 0.5264
Epoch 15/15
45000/45000 [==============================] - 3s 59us/sample - loss: 0.8893 - accuracy: 0.6866 - val_loss: 1.5650 - val_accuracy: 0.5276
###Markdown
**Chapter 11 – Training Deep Neural Networks**

_This notebook contains all the sample code and solutions to the exercises in chapter 11._

Run in Google Colab

Setup

First, let's import a few modules, make sure Matplotlib plots figures inline, and prepare a function to save the figures. We also check that Python 3.5 or later is installed (even though Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# To make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("그림 저장:", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
The Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure: sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions

Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 5s 3ms/step - loss: 1.2819 - accuracy: 0.6229 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.7955 - accuracy: 0.7361 - val_loss: 0.7130 - val_accuracy: 0.7656
Epoch 3/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.6816 - accuracy: 0.7721 - val_loss: 0.6427 - val_accuracy: 0.7898
Epoch 4/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.6217 - accuracy: 0.7943 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5832 - accuracy: 0.8075 - val_loss: 0.5582 - val_accuracy: 0.8202
Epoch 6/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5553 - accuracy: 0.8157 - val_loss: 0.5350 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5338 - accuracy: 0.8224 - val_loss: 0.5157 - val_accuracy: 0.8304
Epoch 8/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5172 - accuracy: 0.8273 - val_loss: 0.5079 - val_accuracy: 0.8282
Epoch 9/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5040 - accuracy: 0.8289 - val_loss: 0.4895 - val_accuracy: 0.8386
Epoch 10/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4924 - accuracy: 0.8321 - val_loss: 0.4817 - val_accuracy: 0.8396
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 6s 3ms/step - loss: 1.3461 - accuracy: 0.6209 - val_loss: 0.9255 - val_accuracy: 0.7184
Epoch 2/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.8197 - accuracy: 0.7355 - val_loss: 0.7305 - val_accuracy: 0.7628
Epoch 3/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.6966 - accuracy: 0.7694 - val_loss: 0.6565 - val_accuracy: 0.7880
Epoch 4/10
1719/1719 [==============================] - 6s 3ms/step - loss: 0.6331 - accuracy: 0.7909 - val_loss: 0.6003 - val_accuracy: 0.8048
Epoch 5/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5917 - accuracy: 0.8057 - val_loss: 0.5656 - val_accuracy: 0.8184
Epoch 6/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5618 - accuracy: 0.8134 - val_loss: 0.5406 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5390 - accuracy: 0.8206 - val_loss: 0.5196 - val_accuracy: 0.8312
Epoch 8/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5213 - accuracy: 0.8257 - val_loss: 0.5113 - val_accuracy: 0.8320
Epoch 9/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5070 - accuracy: 0.8288 - val_loss: 0.4916 - val_accuracy: 0.8380
Epoch 10/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4945 - accuracy: 0.8315 - val_loss: 0.4826 - val_accuracy: 0.8396
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure: elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial: just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU

The SELU activation function was introduced in a [great 2017 paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr. During training, a neural network built exclusively from a stack of dense layers that uses the SELU activation function and LeCun initialization will self-normalize: the output of each layer tends to preserve its mean and standard deviation, which prevents the vanishing/exploding gradients problem. As a result, the SELU activation function often outperforms other activation functions for this kind of network (especially very deep ones), so it is definitely worth trying. However, SELU's self-normalizing property is easily broken: you cannot use ℓ1 or ℓ2 regularization, dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). In practice, though, it works well with sequential CNNs. If self-normalization is broken, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self-normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure: selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned so that the mean output of each neuron remains close to 0 and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000-layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
    W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 32s 19ms/step - loss: 1.4254 - accuracy: 0.4457 - val_loss: 0.9036 - val_accuracy: 0.6758
Epoch 2/5
1719/1719 [==============================] - 32s 19ms/step - loss: 0.8673 - accuracy: 0.6903 - val_loss: 0.7675 - val_accuracy: 0.7316
Epoch 3/5
1719/1719 [==============================] - 32s 18ms/step - loss: 0.6920 - accuracy: 0.7525 - val_loss: 0.6481 - val_accuracy: 0.7694
Epoch 4/5
1719/1719 [==============================] - 32s 18ms/step - loss: 0.6801 - accuracy: 0.7533 - val_loss: 0.6137 - val_accuracy: 0.7852
Epoch 5/5
1719/1719 [==============================] - 32s 18ms/step - loss: 0.5883 - accuracy: 0.7845 - val_loss: 0.5503 - val_accuracy: 0.8036
###Markdown
Now let's see what happens if we use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 33s 19ms/step - loss: 1.8139 - accuracy: 0.2607 - val_loss: 1.4307 - val_accuracy: 0.3734
Epoch 2/5
1719/1719 [==============================] - 32s 19ms/step - loss: 1.1872 - accuracy: 0.4937 - val_loss: 1.0023 - val_accuracy: 0.5844
Epoch 3/5
1719/1719 [==============================] - 32s 19ms/step - loss: 0.9595 - accuracy: 0.6029 - val_loss: 0.8268 - val_accuracy: 0.6698
Epoch 4/5
1719/1719 [==============================] - 32s 19ms/step - loss: 0.9046 - accuracy: 0.6324 - val_loss: 0.8080 - val_accuracy: 0.6908
Epoch 5/5
1719/1719 [==============================] - 32s 19ms/step - loss: 0.8454 - accuracy: 0.6642 - val_loss: 0.7522 - val_accuracy: 0.7180
###Markdown
Not great at all: we suffered from the vanishing/exploding gradients problem.

Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.8750 - accuracy: 0.7123 - val_loss: 0.5525 - val_accuracy: 0.8228
Epoch 2/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.5753 - accuracy: 0.8031 - val_loss: 0.4724 - val_accuracy: 0.8476
Epoch 3/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.5189 - accuracy: 0.8205 - val_loss: 0.4375 - val_accuracy: 0.8546
Epoch 4/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4827 - accuracy: 0.8322 - val_loss: 0.4152 - val_accuracy: 0.8594
Epoch 5/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4565 - accuracy: 0.8408 - val_loss: 0.3997 - val_accuracy: 0.8636
Epoch 6/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4398 - accuracy: 0.8472 - val_loss: 0.3867 - val_accuracy: 0.8700
Epoch 7/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4242 - accuracy: 0.8511 - val_loss: 0.3762 - val_accuracy: 0.8706
Epoch 8/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4144 - accuracy: 0.8541 - val_loss: 0.3710 - val_accuracy: 0.8736
Epoch 9/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4024 - accuracy: 0.8581 - val_loss: 0.3630 - val_accuracy: 0.8756
Epoch 10/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.3915 - accuracy: 0.8623 - val_loss: 0.3572 - val_accuracy: 0.8754
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need bias terms, since the `BatchNormalization` layer cancels them out anyway; those would be useless parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 8s 5ms/step - loss: 1.0317 - accuracy: 0.6757 - val_loss: 0.6767 - val_accuracy: 0.7816
Epoch 2/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.6790 - accuracy: 0.7792 - val_loss: 0.5566 - val_accuracy: 0.8180
Epoch 3/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.5960 - accuracy: 0.8037 - val_loss: 0.5007 - val_accuracy: 0.8360
Epoch 4/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.5447 - accuracy: 0.8192 - val_loss: 0.4666 - val_accuracy: 0.8448
Epoch 5/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.5109 - accuracy: 0.8279 - val_loss: 0.4434 - val_accuracy: 0.8534
Epoch 6/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4898 - accuracy: 0.8336 - val_loss: 0.4263 - val_accuracy: 0.8550
Epoch 7/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4712 - accuracy: 0.8397 - val_loss: 0.4130 - val_accuracy: 0.8572
Epoch 8/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4560 - accuracy: 0.8441 - val_loss: 0.4035 - val_accuracy: 0.8606
Epoch 9/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4441 - accuracy: 0.8473 - val_loss: 0.3943 - val_accuracy: 0.8642
Epoch 10/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4332 - accuracy: 0.8505 - val_loss: 0.3874 - val_accuracy: 0.8662
###Markdown
Gradient Clipping

All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
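###Markdown
Either clipped optimizer can then be passed to `compile()` like any other; a minimal sketch (the small architecture here is just an illustration):
###Code
# Gradients are clipped during training simply because the optimizer was
# configured with clipnorm (or clipvalue); nothing else changes.
clipped_model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
    keras.layers.Dense(10, activation="softmax")
])
clipped_model.compile(loss="sparse_categorical_crossentropy",
                      optimizer=keras.optimizers.SGD(clipnorm=1.0),
                      metrics=["accuracy"])
###Output
_____no_output_____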
###Markdown
Reusing Pretrained Layers

Reusing a Keras model

Let's split the Fashion MNIST training set in two:

* `X_train_A`: all images except sandals and shirts (classes 5 and 6).
* `X_train_B`: a much smaller training set containing just the first 200 images of sandals and shirts.

The validation set and the test set are split the same way, but without restricting the number of images.

We will train a model on set A (a classification task with 8 classes) and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since the classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to the classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers transfer much more information, since learned patterns can be detected anywhere on the image; we will look at this in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 0s 39ms/step - loss: 0.5803 - accuracy: 0.6500 - val_loss: 0.5842 - val_accuracy: 0.6329
Epoch 2/4
7/7 [==============================] - 0s 16ms/step - loss: 0.5436 - accuracy: 0.6800 - val_loss: 0.5466 - val_accuracy: 0.6724
Epoch 3/4
7/7 [==============================] - 0s 16ms/step - loss: 0.5066 - accuracy: 0.7300 - val_loss: 0.5144 - val_accuracy: 0.7099
Epoch 4/4
7/7 [==============================] - 0s 16ms/step - loss: 0.4749 - accuracy: 0.7500 - val_loss: 0.4855 - val_accuracy: 0.7312
Epoch 1/16
7/7 [==============================] - 0s 41ms/step - loss: 0.3964 - accuracy: 0.8100 - val_loss: 0.3461 - val_accuracy: 0.8631
Epoch 2/16
7/7 [==============================] - 0s 15ms/step - loss: 0.2799 - accuracy: 0.9350 - val_loss: 0.2603 - val_accuracy: 0.9260
Epoch 3/16
7/7 [==============================] - 0s 16ms/step - loss: 0.2083 - accuracy: 0.9650 - val_loss: 0.2110 - val_accuracy: 0.9544
Epoch 4/16
7/7 [==============================] - 0s 16ms/step - loss: 0.1670 - accuracy: 0.9800 - val_loss: 0.1790 - val_accuracy: 0.9696
Epoch 5/16
7/7 [==============================] - 0s 18ms/step - loss: 0.1397 - accuracy: 0.9800 - val_loss: 0.1562 - val_accuracy: 0.9757
Epoch 6/16
7/7 [==============================] - 0s 16ms/step - loss: 0.1198 - accuracy: 0.9950 - val_loss: 0.1394 - val_accuracy: 0.9807
Epoch 7/16
7/7 [==============================] - 0s 16ms/step - loss: 0.1051 - accuracy: 0.9950 - val_loss: 0.1267 - val_accuracy: 0.9838
Epoch 8/16
7/7 [==============================] - 0s 16ms/step - loss: 0.0938 - accuracy: 0.9950 - val_loss: 0.1164 - val_accuracy: 0.9858
Epoch 9/16
7/7 [==============================] - 0s 15ms/step - loss: 0.0848 - accuracy: 1.0000 - val_loss: 0.1067 - val_accuracy: 0.9888
Epoch 10/16
7/7 [==============================] - 0s 16ms/step - loss: 0.0763 - accuracy: 1.0000 - val_loss: 0.1001 - val_accuracy: 0.9899
Epoch 11/16
7/7 [==============================] - 0s 15ms/step - loss: 0.0705 - accuracy: 1.0000 - val_loss: 0.0941 - val_accuracy: 0.9899
Epoch 12/16
7/7 [==============================] - 0s 15ms/step - loss: 0.0650 - accuracy: 1.0000 - val_loss: 0.0889 - val_accuracy: 0.9899
Epoch 13/16
7/7 [==============================] - 0s 17ms/step - loss: 0.0603 - accuracy: 1.0000 - val_loss: 0.0840 - val_accuracy: 0.9899
Epoch 14/16
7/7 [==============================] - 0s 18ms/step - loss: 0.0560 - accuracy: 1.0000 - val_loss: 0.0804 - val_accuracy: 0.9899
Epoch 15/16
7/7 [==============================] - 0s 18ms/step - loss: 0.0526 - accuracy: 1.0000 - val_loss: 0.0770 - val_accuracy: 0.9899
Epoch 16/16
7/7 [==============================] - 0s 18ms/step - loss: 0.0497 - accuracy: 1.0000 - val_loss: 0.0740 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 2ms/step - loss: 0.0683 - accuracy: 0.9930
###Markdown
Great! We transferred quite a bit of knowledge: the error rate dropped by a factor of 4!
###Code
(100 - 96.95) / (100 - 99.25)
###Output
_____no_output_____
###Markdown
Faster Optimizers

Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimizer
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimizer
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimizer
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling

Power Scheduling

```lr = lr0 / (1 + steps / s)**c```

* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling

```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
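###Markdown
This variant is passed to `LearningRateScheduler` exactly like the epoch-only version. A quick sketch; note that it only multiplies the optimizer's current learning rate by a constant factor each epoch, so it does not depend on the epoch index:
###Code
# Each epoch the callback calls exponential_decay_fn(epoch, current_lr) and sets
# the optimizer's learning rate to the returned value (current_lr * 0.1**(1/20)).
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
###Output
_____no_output_____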
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
        # Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
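###Markdown
As a quick check of the `piecewise_constant` factory defined above, evaluating the resulting function at a few epochs shows the boundary behavior (the `np.argmax(boundaries > epoch) - 1` trick falls back to the last value once every boundary has been passed):
###Code
# Expected: 0.01 for epochs 0-4, 0.005 for epochs 5-14, 0.001 from epoch 15 on.
print([piecewise_constant_fn(epoch) for epoch in (0, 4, 5, 14, 15, 30)])
###Output
_____no_output_____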
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4894 - accuracy: 0.8277 - val_loss: 0.4096 - val_accuracy: 0.8592
Epoch 2/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3820 - accuracy: 0.8650 - val_loss: 0.3742 - val_accuracy: 0.8700
Epoch 3/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3487 - accuracy: 0.8767 - val_loss: 0.3736 - val_accuracy: 0.8686
Epoch 4/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3265 - accuracy: 0.8838 - val_loss: 0.3496 - val_accuracy: 0.8798
Epoch 5/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3105 - accuracy: 0.8899 - val_loss: 0.3434 - val_accuracy: 0.8800
Epoch 6/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2959 - accuracy: 0.8950 - val_loss: 0.3415 - val_accuracy: 0.8808
Epoch 7/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2855 - accuracy: 0.8987 - val_loss: 0.3354 - val_accuracy: 0.8818
Epoch 8/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2761 - accuracy: 0.9016 - val_loss: 0.3366 - val_accuracy: 0.8810
Epoch 9/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2678 - accuracy: 0.9053 - val_loss: 0.3265 - val_accuracy: 0.8852
Epoch 10/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2608 - accuracy: 0.9069 - val_loss: 0.3240 - val_accuracy: 0.8848
Epoch 11/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2551 - accuracy: 0.9088 - val_loss: 0.3251 - val_accuracy: 0.8868
Epoch 12/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2497 - accuracy: 0.9126 - val_loss: 0.3302 - val_accuracy: 0.8810
Epoch 13/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2449 - accuracy: 0.9136 - val_loss: 0.3218 - val_accuracy: 0.8872
Epoch 14/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2415 - accuracy: 0.9147 - val_loss: 0.3222 - val_accuracy: 0.8860
Epoch 15/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2375 - accuracy: 0.9167 - val_loss: 0.3208 - val_accuracy: 0.8876
Epoch 16/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2343 - accuracy: 0.9179 - val_loss: 0.3185 - val_accuracy: 0.8882
Epoch 17/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2317 - accuracy: 0.9186 - val_loss: 0.3198 - val_accuracy: 0.8890
Epoch 18/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2291 - accuracy: 0.9199 - val_loss: 0.3169 - val_accuracy: 0.8904
Epoch 19/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2269 - accuracy: 0.9206 - val_loss: 0.3197 - val_accuracy: 0.8888
Epoch 20/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2250 - accuracy: 0.9220 - val_loss: 0.3169 - val_accuracy: 0.8902
Epoch 21/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2229 - accuracy: 0.9224 - val_loss: 0.3180 - val_accuracy: 0.8904
Epoch 22/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2216 - accuracy: 0.9225 - val_loss: 0.3163 - val_accuracy: 0.8912
Epoch 23/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2201 - accuracy: 0.9233 - val_loss: 0.3171 - val_accuracy: 0.8906
Epoch 24/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2188 - accuracy: 0.9243 - val_loss: 0.3166 - val_accuracy: 0.8908
Epoch 25/25
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2179 - accuracy: 0.9243 - val_loss: 0.3165 - val_accuracy: 0.8904
###Markdown
For piecewise constant scheduling, use the following:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
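###Markdown
The snippet above references `n_steps_per_epoch`, which is not defined in this notebook. A minimal runnable sketch (assuming, for illustration, a batch size of 32 and the `X_train` set loaded above) could look like this:
###Code
batch_size = 32  # assumed batch size for this sketch
n_steps_per_epoch = len(X_train) // batch_size
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
    values=[0.01, 0.005, 0.001])
optimizer = keras.optimizers.SGD(learning_rate)
###Output
_____no_output_____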
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 2s 4ms/step - loss: 0.6572 - accuracy: 0.7740 - val_loss: 0.4872 - val_accuracy: 0.8338
Epoch 2/25
430/430 [==============================] - 1s 3ms/step - loss: 0.4581 - accuracy: 0.8397 - val_loss: 0.4274 - val_accuracy: 0.8524
Epoch 3/25
430/430 [==============================] - 1s 3ms/step - loss: 0.4121 - accuracy: 0.8545 - val_loss: 0.4116 - val_accuracy: 0.8588
Epoch 4/25
430/430 [==============================] - 1s 3ms/step - loss: 0.3837 - accuracy: 0.8641 - val_loss: 0.3870 - val_accuracy: 0.8686
Epoch 5/25
430/430 [==============================] - 1s 3ms/step - loss: 0.3639 - accuracy: 0.8717 - val_loss: 0.3765 - val_accuracy: 0.8676
Epoch 6/25
430/430 [==============================] - 1s 3ms/step - loss: 0.3457 - accuracy: 0.8774 - val_loss: 0.3742 - val_accuracy: 0.8708
Epoch 7/25
430/430 [==============================] - 1s 3ms/step - loss: 0.3330 - accuracy: 0.8811 - val_loss: 0.3634 - val_accuracy: 0.8704
Epoch 8/25
430/430 [==============================] - 1s 3ms/step - loss: 0.3185 - accuracy: 0.8862 - val_loss: 0.3958 - val_accuracy: 0.8608
Epoch 9/25
430/430 [==============================] - 1s 3ms/step - loss: 0.3065 - accuracy: 0.8893 - val_loss: 0.3483 - val_accuracy: 0.8762
Epoch 10/25
430/430 [==============================] - 1s 3ms/step - loss: 0.2945 - accuracy: 0.8924 - val_loss: 0.3396 - val_accuracy: 0.8812
Epoch 11/25
430/430 [==============================] - 2s 4ms/step - loss: 0.2838 - accuracy: 0.8963 - val_loss: 0.3460 - val_accuracy: 0.8796
Epoch 12/25
430/430 [==============================] - 1s 3ms/step - loss: 0.2709 - accuracy: 0.9023 - val_loss: 0.3644 - val_accuracy: 0.8696
Epoch 13/25
430/430 [==============================] - 1s 3ms/step - loss: 0.2536 - accuracy: 0.9081 - val_loss: 0.3350 - val_accuracy: 0.8838
Epoch 14/25
430/430 [==============================] - 1s 3ms/step - loss: 0.2405 - accuracy: 0.9134 - val_loss: 0.3466 - val_accuracy: 0.8812
Epoch 15/25
430/430 [==============================] - 1s 3ms/step - loss: 0.2280 - accuracy: 0.9183 - val_loss: 0.3260 - val_accuracy: 0.8840
Epoch 16/25
430/430 [==============================] - 1s 3ms/step - loss: 0.2160 - accuracy: 0.9234 - val_loss: 0.3292 - val_accuracy: 0.8834
Epoch 17/25
430/430 [==============================] - 1s 3ms/step - loss: 0.2062 - accuracy: 0.9264 - val_loss: 0.3354 - val_accuracy: 0.8862
Epoch 18/25
430/430 [==============================] - 1s 3ms/step - loss: 0.1978 - accuracy: 0.9305 - val_loss: 0.3236 - val_accuracy: 0.8906
Epoch 19/25
430/430 [==============================] - 1s 3ms/step - loss: 0.1892 - accuracy: 0.9337 - val_loss: 0.3233 - val_accuracy: 0.8904
Epoch 20/25
430/430 [==============================] - 2s 4ms/step - loss: 0.1821 - accuracy: 0.9369 - val_loss: 0.3221 - val_accuracy: 0.8926
Epoch 21/25
430/430 [==============================] - 1s 3ms/step - loss: 0.1752 - accuracy: 0.9401 - val_loss: 0.3215 - val_accuracy: 0.8904
Epoch 22/25
430/430 [==============================] - 1s 3ms/step - loss: 0.1701 - accuracy: 0.9418 - val_loss: 0.3180 - val_accuracy: 0.8956
Epoch 23/25
430/430 [==============================] - 1s 3ms/step - loss: 0.1655 - accuracy: 0.9438 - val_loss: 0.3186 - val_accuracy: 0.8942
Epoch 24/25
430/430 [==============================] - 2s 4ms/step - loss: 0.1628 - accuracy: 0.9458 - val_loss: 0.3176 - val_accuracy: 0.8924
Epoch 25/25
430/430 [==============================] - 1s 3ms/step - loss: 0.1611 - accuracy: 0.9460 - val_loss: 0.3169 - val_accuracy: 0.8930
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 8s 5ms/step - loss: 1.6313 - accuracy: 0.8113 - val_loss: 0.7218 - val_accuracy: 0.8310
Epoch 2/2
1719/1719 [==============================] - 8s 5ms/step - loss: 0.7187 - accuracy: 0.8273 - val_loss: 0.6826 - val_accuracy: 0.8382
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 8s 5ms/step - loss: 0.5838 - accuracy: 0.7998 - val_loss: 0.3730 - val_accuracy: 0.8644
Epoch 2/2
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4209 - accuracy: 0.8443 - val_loss: 0.3406 - val_accuracy: 0.8724
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4167 - accuracy: 0.8463
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use MC Dropout with the model:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4749 - accuracy: 0.8337 - val_loss: 0.3665 - val_accuracy: 0.8676
Epoch 2/2
1719/1719 [==============================] - 8s 5ms/step - loss: 0.3539 - accuracy: 0.8703 - val_loss: 0.3700 - val_accuracy: 0.8672
###Markdown
Exercise solutions 1. to 7. See Appendix A. 8. Deep Learning on CIFAR10 a. *Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b. *Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32×32-pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for an appropriate learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and compared their learning curves over 10 epochs (using the TensorBoard callback below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. Since we're using early stopping, we need a validation set. We'll use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170500096/170498071 [==============================] - 18s 0us/step
###Markdown
Now let's create the callbacks and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 3ms/step - loss: 1.5014 - accuracy: 0.0882
###Markdown
The model with the lowest validation loss reached about 47% accuracy on the validation set. It took 39 epochs to reach that score, at roughly 10 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c. *Exercise: Add Batch Normalization and compare the learning curves. Does it converge faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes: * I added a BN layer after every `Dense` layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer. * I changed the learning rate to 5e-4. I tried 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and picked the one with the best validation performance after 20 epochs. * I renamed run_logdir to run_bn_* and changed the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Epoch 1/100
2/1407 [..............................] - ETA: 9:29 - loss: 2.8693 - accuracy: 0.1094WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0364s vs `on_train_batch_end` time: 0.7737s). Check your callbacks.
1407/1407 [==============================] - 51s 36ms/step - loss: 1.8431 - accuracy: 0.3390 - val_loss: 1.7148 - val_accuracy: 0.3886
Epoch 2/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.6690 - accuracy: 0.4046 - val_loss: 1.6174 - val_accuracy: 0.4144
Epoch 3/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.5972 - accuracy: 0.4320 - val_loss: 1.5171 - val_accuracy: 0.4478
Epoch 4/100
1407/1407 [==============================] - 50s 35ms/step - loss: 1.5463 - accuracy: 0.4495 - val_loss: 1.4883 - val_accuracy: 0.4688
Epoch 5/100
1407/1407 [==============================] - 50s 35ms/step - loss: 1.5051 - accuracy: 0.4641 - val_loss: 1.4369 - val_accuracy: 0.4892
Epoch 6/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.4684 - accuracy: 0.4793 - val_loss: 1.4056 - val_accuracy: 0.5018
Epoch 7/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.4350 - accuracy: 0.4895 - val_loss: 1.4292 - val_accuracy: 0.4888
Epoch 8/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.4087 - accuracy: 0.5006 - val_loss: 1.4021 - val_accuracy: 0.5088
Epoch 9/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.3834 - accuracy: 0.5095 - val_loss: 1.3738 - val_accuracy: 0.5110
Epoch 10/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.3645 - accuracy: 0.5167 - val_loss: 1.3432 - val_accuracy: 0.5252
Epoch 11/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.3428 - accuracy: 0.5258 - val_loss: 1.3583 - val_accuracy: 0.5132
Epoch 12/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.3227 - accuracy: 0.5316 - val_loss: 1.3820 - val_accuracy: 0.5052
Epoch 13/100
1407/1407 [==============================] - 48s 34ms/step - loss: 1.3010 - accuracy: 0.5371 - val_loss: 1.3794 - val_accuracy: 0.5094
Epoch 14/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.2838 - accuracy: 0.5446 - val_loss: 1.3531 - val_accuracy: 0.5260
Epoch 15/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.2621 - accuracy: 0.5548 - val_loss: 1.3641 - val_accuracy: 0.5256
Epoch 16/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.2535 - accuracy: 0.5572 - val_loss: 1.3720 - val_accuracy: 0.5276
Epoch 17/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.2355 - accuracy: 0.5609 - val_loss: 1.3184 - val_accuracy: 0.5348
Epoch 18/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.2164 - accuracy: 0.5685 - val_loss: 1.3487 - val_accuracy: 0.5296
Epoch 19/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.2037 - accuracy: 0.5770 - val_loss: 1.3278 - val_accuracy: 0.5366
Epoch 20/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.1916 - accuracy: 0.5789 - val_loss: 1.3592 - val_accuracy: 0.5260
Epoch 21/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.1782 - accuracy: 0.5848 - val_loss: 1.3478 - val_accuracy: 0.5302
Epoch 22/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.1587 - accuracy: 0.5913 - val_loss: 1.3477 - val_accuracy: 0.5308
Epoch 23/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.1481 - accuracy: 0.5933 - val_loss: 1.3285 - val_accuracy: 0.5378
Epoch 24/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.1395 - accuracy: 0.5989 - val_loss: 1.3393 - val_accuracy: 0.5388
Epoch 25/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.1285 - accuracy: 0.6044 - val_loss: 1.3436 - val_accuracy: 0.5354
Epoch 26/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.1080 - accuracy: 0.6085 - val_loss: 1.3496 - val_accuracy: 0.5258
Epoch 27/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.0971 - accuracy: 0.6143 - val_loss: 1.3484 - val_accuracy: 0.5350
Epoch 28/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.0978 - accuracy: 0.6121 - val_loss: 1.3698 - val_accuracy: 0.5274
Epoch 29/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.0825 - accuracy: 0.6198 - val_loss: 1.3416 - val_accuracy: 0.5348
Epoch 30/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.0698 - accuracy: 0.6219 - val_loss: 1.3363 - val_accuracy: 0.5366
Epoch 31/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.0569 - accuracy: 0.6262 - val_loss: 1.3536 - val_accuracy: 0.5356
Epoch 32/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.0489 - accuracy: 0.6306 - val_loss: 1.3822 - val_accuracy: 0.5220
Epoch 33/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.0387 - accuracy: 0.6338 - val_loss: 1.3633 - val_accuracy: 0.5404
Epoch 34/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.0342 - accuracy: 0.6344 - val_loss: 1.3611 - val_accuracy: 0.5364
Epoch 35/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.0163 - accuracy: 0.6422 - val_loss: 1.3904 - val_accuracy: 0.5356
Epoch 36/100
1407/1407 [==============================] - 49s 35ms/step - loss: 1.0137 - accuracy: 0.6421 - val_loss: 1.3795 - val_accuracy: 0.5408
Epoch 37/100
1407/1407 [==============================] - 49s 35ms/step - loss: 0.9991 - accuracy: 0.6491 - val_loss: 1.3334 - val_accuracy: 0.5444
157/157 [==============================] - 1s 5ms/step - loss: 1.3184 - accuracy: 0.1154
###Markdown
* *Does it converge faster than before?* Much faster! The previous model took 39 epochs to reach the lowest validation loss, while the new model with BN took 18 epochs, more than twice as fast. The BN layers stabilize training and let us use a larger learning rate, which speeds up convergence. * *Does BN produce a better model?* Yes! The final model performs better, with 55% accuracy instead of 47%. It's still not a great model, but at least it's better than before (a convolutional neural network would do better, but that's a different topic; see chapter 14). * *How does BN affect training speed?* The model converged twice as fast, but each epoch took about 16 seconds instead of 10, because of the extra computations in the BN layers. So overall, although the number of epochs was reduced by about 50%, the training time (wall time) was only shortened by about 30%. Still a big improvement! d. *Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, use only a sequence of fully connected layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
157/157 [==============================] - 1s 3ms/step - loss: 1.4753 - accuracy: 0.1256
###Markdown
We get 51.4% accuracy, which is better than the original model but not quite as good as the model using batch normalization. It took 13 epochs to reach the best model, which is faster than both the original model and the BN model, and each epoch took only about 10 seconds, just like the original model. So this is the fastest model so far (in terms of both epochs and wall time). e. *Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
2/1407 [..............................] - ETA: 4:07 - loss: 2.9857 - accuracy: 0.0938WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0168s vs `on_train_batch_end` time: 0.3359s). Check your callbacks.
1407/1407 [==============================] - 23s 17ms/step - loss: 1.8896 - accuracy: 0.3275 - val_loss: 1.7313 - val_accuracy: 0.3970
Epoch 2/100
1407/1407 [==============================] - 23s 16ms/step - loss: 1.6589 - accuracy: 0.4157 - val_loss: 1.7183 - val_accuracy: 0.3916
Epoch 3/100
1407/1407 [==============================] - 23s 16ms/step - loss: 1.5727 - accuracy: 0.4479 - val_loss: 1.6073 - val_accuracy: 0.4364
Epoch 4/100
1407/1407 [==============================] - 23s 16ms/step - loss: 1.5085 - accuracy: 0.4734 - val_loss: 1.5741 - val_accuracy: 0.4524
Epoch 5/100
1407/1407 [==============================] - 23s 16ms/step - loss: 1.4525 - accuracy: 0.4946 - val_loss: 1.5663 - val_accuracy: 0.4592
Epoch 6/100
1407/1407 [==============================] - 23s 16ms/step - loss: 1.4032 - accuracy: 0.5124 - val_loss: 1.5255 - val_accuracy: 0.4644
Epoch 7/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.3581 - accuracy: 0.5255 - val_loss: 1.6598 - val_accuracy: 0.4662
Epoch 8/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.3209 - accuracy: 0.5400 - val_loss: 1.5027 - val_accuracy: 0.5002
Epoch 9/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.2845 - accuracy: 0.5562 - val_loss: 1.5246 - val_accuracy: 0.4896
Epoch 10/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.2526 - accuracy: 0.5659 - val_loss: 1.5510 - val_accuracy: 0.4956
Epoch 11/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.2160 - accuracy: 0.5808 - val_loss: 1.5559 - val_accuracy: 0.5002
Epoch 12/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.1902 - accuracy: 0.5900 - val_loss: 1.5478 - val_accuracy: 0.4968
Epoch 13/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.1602 - accuracy: 0.6021 - val_loss: 1.5727 - val_accuracy: 0.5124
Epoch 14/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.1392 - accuracy: 0.6102 - val_loss: 1.5654 - val_accuracy: 0.4944
Epoch 15/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.1086 - accuracy: 0.6210 - val_loss: 1.5868 - val_accuracy: 0.5064
Epoch 16/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.0856 - accuracy: 0.6289 - val_loss: 1.6016 - val_accuracy: 0.5042
Epoch 17/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.0620 - accuracy: 0.6397 - val_loss: 1.6458 - val_accuracy: 0.4968
Epoch 18/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.0511 - accuracy: 0.6405 - val_loss: 1.6276 - val_accuracy: 0.5096
Epoch 19/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.0203 - accuracy: 0.6514 - val_loss: 1.7246 - val_accuracy: 0.5062
Epoch 20/100
1407/1407 [==============================] - 22s 16ms/step - loss: 1.0024 - accuracy: 0.6598 - val_loss: 1.6570 - val_accuracy: 0.5064
Epoch 21/100
1407/1407 [==============================] - 22s 16ms/step - loss: 0.9845 - accuracy: 0.6662 - val_loss: 1.6697 - val_accuracy: 0.4990
Epoch 22/100
1407/1407 [==============================] - 22s 16ms/step - loss: 0.9641 - accuracy: 0.6738 - val_loss: 1.7560 - val_accuracy: 0.5010
Epoch 23/100
1407/1407 [==============================] - 22s 16ms/step - loss: 0.9387 - accuracy: 0.6797 - val_loss: 1.7716 - val_accuracy: 0.5008
Epoch 24/100
1407/1407 [==============================] - 22s 16ms/step - loss: 0.9290 - accuracy: 0.6852 - val_loss: 1.7688 - val_accuracy: 0.5026
Epoch 25/100
1407/1407 [==============================] - 22s 16ms/step - loss: 0.9176 - accuracy: 0.6899 - val_loss: 1.8131 - val_accuracy: 0.5042
Epoch 26/100
1407/1407 [==============================] - 22s 16ms/step - loss: 0.8925 - accuracy: 0.6986 - val_loss: 1.8228 - val_accuracy: 0.4904
Epoch 27/100
1407/1407 [==============================] - 22s 16ms/step - loss: 0.8680 - accuracy: 0.7060 - val_loss: 1.8546 - val_accuracy: 0.5048
Epoch 28/100
1407/1407 [==============================] - 22s 16ms/step - loss: 0.8638 - accuracy: 0.7091 - val_loss: 1.8004 - val_accuracy: 0.4954
157/157 [==============================] - 1s 3ms/step - loss: 1.5027 - accuracy: 0.0914
###Markdown
The model reaches 50.8% accuracy on the validation set, slightly worse than without dropout (51.4%). A more extensive hyperparameter search might improve this a bit (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4 and 1e-3), but probably not by much in this case. Now let's use MC Dropout. We'll reuse the `MCAlphaDropout` class defined earlier:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (and with the same weights), but using `MCAlphaDropout` layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Next we add a couple of utility functions. The first runs the model several times (10 by default) and returns the mean predicted class probabilities. The second uses these mean probabilities to predict each instance's class:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get no real accuracy improvement in this case (from 50.8% to 50.9%). So the best model we got in this exercise is the Batch Normalization model. f. *Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(len(X_train_scaled) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/15
352/352 [==============================] - 3s 9ms/step - loss: 2.0537 - accuracy: 0.2843 - val_loss: 1.7811 - val_accuracy: 0.3744
Epoch 2/15
352/352 [==============================] - 3s 7ms/step - loss: 1.7635 - accuracy: 0.3765 - val_loss: 1.6431 - val_accuracy: 0.4252
Epoch 3/15
352/352 [==============================] - 3s 7ms/step - loss: 1.6241 - accuracy: 0.4217 - val_loss: 1.6001 - val_accuracy: 0.4368
Epoch 4/15
352/352 [==============================] - 3s 7ms/step - loss: 1.5434 - accuracy: 0.4520 - val_loss: 1.6114 - val_accuracy: 0.4310
Epoch 5/15
352/352 [==============================] - 3s 7ms/step - loss: 1.4914 - accuracy: 0.4710 - val_loss: 1.5895 - val_accuracy: 0.4434
Epoch 6/15
352/352 [==============================] - 3s 7ms/step - loss: 1.4510 - accuracy: 0.4818 - val_loss: 1.5678 - val_accuracy: 0.4506
Epoch 7/15
352/352 [==============================] - 3s 7ms/step - loss: 1.4143 - accuracy: 0.4979 - val_loss: 1.6717 - val_accuracy: 0.4294
Epoch 8/15
352/352 [==============================] - 3s 7ms/step - loss: 1.3462 - accuracy: 0.5199 - val_loss: 1.4928 - val_accuracy: 0.4956
Epoch 9/15
352/352 [==============================] - 3s 7ms/step - loss: 1.2691 - accuracy: 0.5481 - val_loss: 1.5294 - val_accuracy: 0.4818
Epoch 10/15
352/352 [==============================] - 3s 7ms/step - loss: 1.1994 - accuracy: 0.5713 - val_loss: 1.5165 - val_accuracy: 0.4978
Epoch 11/15
352/352 [==============================] - 3s 7ms/step - loss: 1.1308 - accuracy: 0.5980 - val_loss: 1.5070 - val_accuracy: 0.5100
Epoch 12/15
352/352 [==============================] - 3s 7ms/step - loss: 1.0632 - accuracy: 0.6184 - val_loss: 1.4833 - val_accuracy: 0.5244
Epoch 13/15
352/352 [==============================] - 3s 7ms/step - loss: 0.9932 - accuracy: 0.6447 - val_loss: 1.5314 - val_accuracy: 0.5292
Epoch 14/15
352/352 [==============================] - 3s 7ms/step - loss: 0.9279 - accuracy: 0.6671 - val_loss: 1.5495 - val_accuracy: 0.5248
Epoch 15/15
352/352 [==============================] - 3s 7ms/step - loss: 0.8880 - accuracy: 0.6845 - val_loss: 1.5840 - val_accuracy: 0.5288
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure Matplotlib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
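###Markdown
As a reminder, "he_normal" draws weights with variance $\sigma^2 = 2/\mathrm{fan}_{in}$, while the `VarianceScaling(scale=2., mode='fan_avg', distribution='uniform')` example above uses a uniform distribution with variance $2/\mathrm{fan}_{avg}$; both aim to keep the activation variance roughly constant across ReLU-family layers.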
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 50us/sample - loss: 1.2806 - accuracy: 0.6250 - val_loss: 0.8883 - val_accuracy: 0.7152
Epoch 2/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.7954 - accuracy: 0.7373 - val_loss: 0.7135 - val_accuracy: 0.7648
Epoch 3/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.6816 - accuracy: 0.7727 - val_loss: 0.6356 - val_accuracy: 0.7882
Epoch 4/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.6215 - accuracy: 0.7935 - val_loss: 0.5922 - val_accuracy: 0.8012
Epoch 5/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5830 - accuracy: 0.8081 - val_loss: 0.5596 - val_accuracy: 0.8172
Epoch 6/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5553 - accuracy: 0.8155 - val_loss: 0.5338 - val_accuracy: 0.8240
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5340 - accuracy: 0.8221 - val_loss: 0.5157 - val_accuracy: 0.8310
Epoch 8/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5172 - accuracy: 0.8265 - val_loss: 0.5035 - val_accuracy: 0.8336
Epoch 9/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5036 - accuracy: 0.8299 - val_loss: 0.4950 - val_accuracy: 0.8354
Epoch 10/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.4922 - accuracy: 0.8324 - val_loss: 0.4797 - val_accuracy: 0.8430
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 61us/sample - loss: 1.3460 - accuracy: 0.6233 - val_loss: 0.9251 - val_accuracy: 0.7208
Epoch 2/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.8208 - accuracy: 0.7359 - val_loss: 0.7318 - val_accuracy: 0.7626
Epoch 3/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.6974 - accuracy: 0.7695 - val_loss: 0.6500 - val_accuracy: 0.7886
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.6338 - accuracy: 0.7904 - val_loss: 0.6000 - val_accuracy: 0.8070
Epoch 5/10
55000/55000 [==============================] - 3s 57us/sample - loss: 0.5920 - accuracy: 0.8045 - val_loss: 0.5662 - val_accuracy: 0.8172
Epoch 6/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5620 - accuracy: 0.8138 - val_loss: 0.5416 - val_accuracy: 0.8230
Epoch 7/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5393 - accuracy: 0.8203 - val_loss: 0.5218 - val_accuracy: 0.8302
Epoch 8/10
55000/55000 [==============================] - 3s 57us/sample - loss: 0.5216 - accuracy: 0.8248 - val_loss: 0.5051 - val_accuracy: 0.8340
Epoch 9/10
55000/55000 [==============================] - 3s 59us/sample - loss: 0.5069 - accuracy: 0.8289 - val_loss: 0.4923 - val_accuracy: 0.8384
Epoch 10/10
55000/55000 [==============================] - 3s 62us/sample - loss: 0.4948 - accuracy: 0.8322 - val_loss: 0.4847 - val_accuracy: 0.8372
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 35s 644us/sample - loss: 1.0197 - accuracy: 0.6154 - val_loss: 0.7386 - val_accuracy: 0.7348
Epoch 2/5
55000/55000 [==============================] - 33s 607us/sample - loss: 0.7149 - accuracy: 0.7401 - val_loss: 0.6187 - val_accuracy: 0.7774
Epoch 3/5
55000/55000 [==============================] - 32s 583us/sample - loss: 0.6193 - accuracy: 0.7803 - val_loss: 0.5926 - val_accuracy: 0.8036
Epoch 4/5
55000/55000 [==============================] - 32s 586us/sample - loss: 0.5555 - accuracy: 0.8043 - val_loss: 0.5208 - val_accuracy: 0.8262
Epoch 5/5
55000/55000 [==============================] - 32s 573us/sample - loss: 0.5159 - accuracy: 0.8238 - val_loss: 0.4790 - val_accuracy: 0.8358
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 18s 319us/sample - loss: 1.9174 - accuracy: 0.2242 - val_loss: 1.3856 - val_accuracy: 0.3846
Epoch 2/5
55000/55000 [==============================] - 15s 279us/sample - loss: 1.2147 - accuracy: 0.4750 - val_loss: 1.0691 - val_accuracy: 0.5510
Epoch 3/5
55000/55000 [==============================] - 15s 281us/sample - loss: 0.9576 - accuracy: 0.6025 - val_loss: 0.7688 - val_accuracy: 0.7036
Epoch 4/5
55000/55000 [==============================] - 15s 281us/sample - loss: 0.8116 - accuracy: 0.6762 - val_loss: 0.7276 - val_accuracy: 0.7288
Epoch 5/5
55000/55000 [==============================] - 15s 278us/sample - loss: 0.8167 - accuracy: 0.6862 - val_loss: 0.7697 - val_accuracy: 0.7032
###Markdown
Not great at all, we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 5s 85us/sample - loss: 0.8756 - accuracy: 0.7140 - val_loss: 0.5514 - val_accuracy: 0.8212
Epoch 2/10
55000/55000 [==============================] - 4s 74us/sample - loss: 0.5765 - accuracy: 0.8033 - val_loss: 0.4742 - val_accuracy: 0.8436
Epoch 3/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.5146 - accuracy: 0.8216 - val_loss: 0.4382 - val_accuracy: 0.8530
Epoch 4/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4821 - accuracy: 0.8322 - val_loss: 0.4170 - val_accuracy: 0.8604
Epoch 5/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4589 - accuracy: 0.8402 - val_loss: 0.4003 - val_accuracy: 0.8658
Epoch 6/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4428 - accuracy: 0.8459 - val_loss: 0.3883 - val_accuracy: 0.8698
Epoch 7/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4220 - accuracy: 0.8521 - val_loss: 0.3792 - val_accuracy: 0.8720
Epoch 8/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4150 - accuracy: 0.8546 - val_loss: 0.3696 - val_accuracy: 0.8754
Epoch 9/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4013 - accuracy: 0.8589 - val_loss: 0.3629 - val_accuracy: 0.8746
Epoch 10/10
55000/55000 [==============================] - 4s 74us/sample - loss: 0.3931 - accuracy: 0.8615 - val_loss: 0.3581 - val_accuracy: 0.8766
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer adds an offset parameter of its own; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.Activation("relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 5s 89us/sample - loss: 0.8617 - accuracy: 0.7095 - val_loss: 0.5649 - val_accuracy: 0.8102
Epoch 2/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.5803 - accuracy: 0.8015 - val_loss: 0.4833 - val_accuracy: 0.8344
Epoch 3/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.5153 - accuracy: 0.8208 - val_loss: 0.4463 - val_accuracy: 0.8462
Epoch 4/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.4846 - accuracy: 0.8307 - val_loss: 0.4256 - val_accuracy: 0.8530
Epoch 5/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.4576 - accuracy: 0.8402 - val_loss: 0.4106 - val_accuracy: 0.8590
Epoch 6/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4401 - accuracy: 0.8467 - val_loss: 0.3973 - val_accuracy: 0.8610
Epoch 7/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4296 - accuracy: 0.8482 - val_loss: 0.3899 - val_accuracy: 0.8650
Epoch 8/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.4127 - accuracy: 0.8559 - val_loss: 0.3818 - val_accuracy: 0.8658
Epoch 9/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4007 - accuracy: 0.8588 - val_loss: 0.3741 - val_accuracy: 0.8682
Epoch 10/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.3929 - accuracy: 0.8621 - val_loss: 0.3694 - val_accuracy: 0.8734
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
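###Markdown
As a quick illustration (a minimal sketch, not part of the original cells: the small architecture and the loss below are just assumptions for the example), a clipping optimizer is passed to `compile()` like any other optimizer:
###Code
# Minimal sketch: compile a small classifier with gradient clipping by value.
# The layer sizes here are arbitrary and only serve as an illustration.
clip_model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"),
    keras.layers.Dense(10, activation="softmax")
])
clip_model.compile(loss="sparse_categorical_crossentropy",
                   optimizer=keras.optimizers.SGD(lr=1e-3, clipvalue=1.0),
                   metrics=["accuracy"])
###Output
_____no_output_____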
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Train on 200 samples, validate on 986 samples
Epoch 1/4
200/200 [==============================] - 0s 2ms/sample - loss: 0.5851 - accuracy: 0.6600 - val_loss: 0.5855 - val_accuracy: 0.6318
Epoch 2/4
200/200 [==============================] - 0s 303us/sample - loss: 0.5484 - accuracy: 0.6850 - val_loss: 0.5484 - val_accuracy: 0.6775
Epoch 3/4
200/200 [==============================] - 0s 294us/sample - loss: 0.5116 - accuracy: 0.7250 - val_loss: 0.5141 - val_accuracy: 0.7160
Epoch 4/4
200/200 [==============================] - 0s 316us/sample - loss: 0.4779 - accuracy: 0.7450 - val_loss: 0.4859 - val_accuracy: 0.7363
Train on 200 samples, validate on 986 samples
Epoch 1/16
200/200 [==============================] - 0s 2ms/sample - loss: 0.3989 - accuracy: 0.8050 - val_loss: 0.3419 - val_accuracy: 0.8702
Epoch 2/16
200/200 [==============================] - 0s 328us/sample - loss: 0.2795 - accuracy: 0.9300 - val_loss: 0.2624 - val_accuracy: 0.9280
Epoch 3/16
200/200 [==============================] - 0s 319us/sample - loss: 0.2128 - accuracy: 0.9650 - val_loss: 0.2150 - val_accuracy: 0.9544
Epoch 4/16
200/200 [==============================] - 0s 318us/sample - loss: 0.1720 - accuracy: 0.9800 - val_loss: 0.1826 - val_accuracy: 0.9635
Epoch 5/16
200/200 [==============================] - 0s 317us/sample - loss: 0.1436 - accuracy: 0.9800 - val_loss: 0.1586 - val_accuracy: 0.9736
Epoch 6/16
200/200 [==============================] - 0s 317us/sample - loss: 0.1231 - accuracy: 0.9850 - val_loss: 0.1407 - val_accuracy: 0.9807
Epoch 7/16
200/200 [==============================] - 0s 325us/sample - loss: 0.1074 - accuracy: 0.9900 - val_loss: 0.1270 - val_accuracy: 0.9828
Epoch 8/16
200/200 [==============================] - 0s 326us/sample - loss: 0.0953 - accuracy: 0.9950 - val_loss: 0.1158 - val_accuracy: 0.9848
Epoch 9/16
200/200 [==============================] - 0s 319us/sample - loss: 0.0854 - accuracy: 1.0000 - val_loss: 0.1076 - val_accuracy: 0.9878
Epoch 10/16
200/200 [==============================] - 0s 322us/sample - loss: 0.0781 - accuracy: 1.0000 - val_loss: 0.1007 - val_accuracy: 0.9888
Epoch 11/16
200/200 [==============================] - 0s 316us/sample - loss: 0.0718 - accuracy: 1.0000 - val_loss: 0.0944 - val_accuracy: 0.9888
Epoch 12/16
200/200 [==============================] - 0s 319us/sample - loss: 0.0662 - accuracy: 1.0000 - val_loss: 0.0891 - val_accuracy: 0.9899
Epoch 13/16
200/200 [==============================] - 0s 318us/sample - loss: 0.0613 - accuracy: 1.0000 - val_loss: 0.0846 - val_accuracy: 0.9899
Epoch 14/16
200/200 [==============================] - 0s 332us/sample - loss: 0.0574 - accuracy: 1.0000 - val_loss: 0.0806 - val_accuracy: 0.9899
Epoch 15/16
200/200 [==============================] - 0s 320us/sample - loss: 0.0538 - accuracy: 1.0000 - val_loss: 0.0770 - val_accuracy: 0.9899
Epoch 16/16
200/200 [==============================] - 0s 320us/sample - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.0740 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
2000/2000 [==============================] - 0s 38us/sample - loss: 0.0689 - accuracy: 0.9925
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of almost 4!
###Code
(100 - 97.05) / (100 - 99.25)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
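###Markdown
Background note (the standard formulation, added for reference rather than taken from the cells above): momentum optimization keeps a momentum vector $\mathbf{m}$ and at each step updates $\mathbf{m} \leftarrow \beta \mathbf{m} - \eta \nabla_{\boldsymbol{\theta}} J(\boldsymbol{\theta})$ followed by $\boldsymbol{\theta} \leftarrow \boldsymbol{\theta} + \mathbf{m}$, where $\beta$ is the `momentum` argument (0.9 above) and $\eta$ is the learning rate.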
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
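###Markdown
Background note (standard formulation, not taken from the cells above): Nesterov accelerated gradient uses the same update as momentum optimization, except that the gradient is measured slightly ahead of the current position, at $\boldsymbol{\theta} + \beta \mathbf{m}$ rather than at $\boldsymbol{\theta}$, which typically gives a small extra speed-up.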
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
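###Markdown
Background note (standard formulation, not taken from the cells above): AdaGrad accumulates the squares of the gradients, $\mathbf{s} \leftarrow \mathbf{s} + \nabla_{\boldsymbol{\theta}} J(\boldsymbol{\theta}) \otimes \nabla_{\boldsymbol{\theta}} J(\boldsymbol{\theta})$, and scales the update as $\boldsymbol{\theta} \leftarrow \boldsymbol{\theta} - \eta \, \nabla_{\boldsymbol{\theta}} J(\boldsymbol{\theta}) \oslash \sqrt{\mathbf{s} + \varepsilon}$; because $\mathbf{s}$ only grows, the effective learning rate keeps shrinking, which is why AdaGrad often stops too early on deep networks.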
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
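###Markdown
Background note (standard formulation, not taken from the cells above): RMSProp fixes AdaGrad's ever-shrinking learning rate by accumulating only an exponentially decaying average of the squared gradients, $\mathbf{s} \leftarrow \rho \mathbf{s} + (1-\rho) \, \nabla_{\boldsymbol{\theta}} J(\boldsymbol{\theta}) \otimes \nabla_{\boldsymbol{\theta}} J(\boldsymbol{\theta})$, with the same scaled update as AdaGrad; `rho` above is the decay rate $\rho$.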
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
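###Markdown
Background note (standard formulation, not taken from the cells above): Adam combines both ideas, keeping a decaying average of past gradients, $\mathbf{m} \leftarrow \beta_1 \mathbf{m} + (1-\beta_1) \nabla_{\boldsymbol{\theta}} J(\boldsymbol{\theta})$, and of past squared gradients, $\mathbf{s} \leftarrow \beta_2 \mathbf{s} + (1-\beta_2) \nabla_{\boldsymbol{\theta}} J(\boldsymbol{\theta}) \otimes \nabla_{\boldsymbol{\theta}} J(\boldsymbol{\theta})$, applies bias correction ($\hat{\mathbf{m}} = \mathbf{m}/(1-\beta_1^t)$, $\hat{\mathbf{s}} = \mathbf{s}/(1-\beta_2^t)$), and then updates $\boldsymbol{\theta} \leftarrow \boldsymbol{\theta} - \eta \, \hat{\mathbf{m}} \oslash \sqrt{\hat{\mathbf{s}} + \varepsilon}$; `beta_1` and `beta_2` above are $\beta_1$ and $\beta_2$.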
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
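###Markdown
Background note on the last two variants (standard formulations, not taken from the cells above): Adamax replaces Adam's $\sqrt{\hat{\mathbf{s}}}$ scaling with an $\ell_\infty$-based norm (a running maximum of the scaled gradients), which can make it more stable on some problems, while Nadam is simply Adam plus the Nesterov trick, so it often converges slightly faster than plain Adam.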
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
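###Markdown
A small usage sketch (assuming the compiled `model` and the scaled data from the earlier cells): this two-argument version is passed to `LearningRateScheduler` exactly like the one-argument version. Since it scales whatever the optimizer's current learning rate happens to be, the schedule simply continues from that value, for example when training is resumed.
###Code
# Sketch only: reuse the two-argument schedule function defined just above.
# `model`, `X_train_scaled`, `y_train`, `X_valid_scaled` and `y_valid` are
# assumed to exist, as in the surrounding cells.
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
#history = model.fit(X_train_scaled, y_train, epochs=5,
#                    validation_data=(X_valid_scaled, y_valid),
#                    callbacks=[lr_scheduler])
###Output
_____no_output_____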
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4887 - accuracy: 0.8282 - val_loss: 0.4245 - val_accuracy: 0.8526
Epoch 2/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.3830 - accuracy: 0.8641 - val_loss: 0.3798 - val_accuracy: 0.8688
Epoch 3/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.3491 - accuracy: 0.8758 - val_loss: 0.3650 - val_accuracy: 0.8730
Epoch 4/25
55000/55000 [==============================] - 4s 78us/sample - loss: 0.3267 - accuracy: 0.8839 - val_loss: 0.3564 - val_accuracy: 0.8746
Epoch 5/25
55000/55000 [==============================] - 4s 72us/sample - loss: 0.3102 - accuracy: 0.8893 - val_loss: 0.3493 - val_accuracy: 0.8770
Epoch 6/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2969 - accuracy: 0.8939 - val_loss: 0.3400 - val_accuracy: 0.8818
Epoch 7/25
55000/55000 [==============================] - 4s 77us/sample - loss: 0.2855 - accuracy: 0.8983 - val_loss: 0.3385 - val_accuracy: 0.8830
Epoch 8/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2764 - accuracy: 0.9025 - val_loss: 0.3372 - val_accuracy: 0.8824
Epoch 9/25
55000/55000 [==============================] - 4s 67us/sample - loss: 0.2684 - accuracy: 0.9039 - val_loss: 0.3337 - val_accuracy: 0.8848
Epoch 10/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2613 - accuracy: 0.9072 - val_loss: 0.3277 - val_accuracy: 0.8862
Epoch 11/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2555 - accuracy: 0.9086 - val_loss: 0.3273 - val_accuracy: 0.8860
Epoch 12/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2500 - accuracy: 0.9111 - val_loss: 0.3244 - val_accuracy: 0.8840
Epoch 13/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2454 - accuracy: 0.9124 - val_loss: 0.3194 - val_accuracy: 0.8904
Epoch 14/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2414 - accuracy: 0.9141 - val_loss: 0.3226 - val_accuracy: 0.8884
Epoch 15/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2378 - accuracy: 0.9160 - val_loss: 0.3233 - val_accuracy: 0.8860
Epoch 16/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2347 - accuracy: 0.9174 - val_loss: 0.3207 - val_accuracy: 0.8904
Epoch 17/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2318 - accuracy: 0.9179 - val_loss: 0.3195 - val_accuracy: 0.8892
Epoch 18/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2293 - accuracy: 0.9193 - val_loss: 0.3184 - val_accuracy: 0.8916
Epoch 19/25
55000/55000 [==============================] - 4s 67us/sample - loss: 0.2272 - accuracy: 0.9201 - val_loss: 0.3196 - val_accuracy: 0.8886
Epoch 20/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2253 - accuracy: 0.9206 - val_loss: 0.3190 - val_accuracy: 0.8918
Epoch 21/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2235 - accuracy: 0.9214 - val_loss: 0.3176 - val_accuracy: 0.8912
Epoch 22/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2220 - accuracy: 0.9220 - val_loss: 0.3181 - val_accuracy: 0.8900
Epoch 23/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2206 - accuracy: 0.9226 - val_loss: 0.3187 - val_accuracy: 0.8894
Epoch 24/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2193 - accuracy: 0.9231 - val_loss: 0.3168 - val_accuracy: 0.8908
Epoch 25/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2181 - accuracy: 0.9234 - val_loss: 0.3171 - val_accuracy: 0.8898
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
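###Markdown
A minimal sketch (not from the original cells): just like the `ExponentialDecay` schedule used a few cells above, this schedule object is passed to an optimizer in place of a fixed learning rate.
###Code
# Sketch only: plug the piecewise-constant schedule into an optimizer.
optimizer = keras.optimizers.SGD(learning_rate)
###Output
_____no_output_____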
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.6569 - accuracy: 0.7750 - val_loss: 0.4875 - val_accuracy: 0.8300
Epoch 2/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.4584 - accuracy: 0.8391 - val_loss: 0.4390 - val_accuracy: 0.8476
Epoch 3/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.4124 - accuracy: 0.8541 - val_loss: 0.4102 - val_accuracy: 0.8570
Epoch 4/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.3842 - accuracy: 0.8643 - val_loss: 0.3893 - val_accuracy: 0.8652
Epoch 5/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3641 - accuracy: 0.8707 - val_loss: 0.3736 - val_accuracy: 0.8678
Epoch 6/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.3456 - accuracy: 0.8781 - val_loss: 0.3652 - val_accuracy: 0.8726
Epoch 7/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.3318 - accuracy: 0.8818 - val_loss: 0.3596 - val_accuracy: 0.8768
Epoch 8/25
55000/55000 [==============================] - 1s 24us/sample - loss: 0.3180 - accuracy: 0.8862 - val_loss: 0.3845 - val_accuracy: 0.8602
Epoch 9/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.3062 - accuracy: 0.8893 - val_loss: 0.3824 - val_accuracy: 0.8660
Epoch 10/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.2938 - accuracy: 0.8934 - val_loss: 0.3516 - val_accuracy: 0.8742
Epoch 11/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.2838 - accuracy: 0.8975 - val_loss: 0.3609 - val_accuracy: 0.8740
Epoch 12/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.2716 - accuracy: 0.9025 - val_loss: 0.3843 - val_accuracy: 0.8666
Epoch 13/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.2541 - accuracy: 0.9091 - val_loss: 0.3282 - val_accuracy: 0.8844
Epoch 14/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.2390 - accuracy: 0.9139 - val_loss: 0.3336 - val_accuracy: 0.8838
Epoch 15/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.2273 - accuracy: 0.9177 - val_loss: 0.3283 - val_accuracy: 0.8884
Epoch 16/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.2156 - accuracy: 0.9234 - val_loss: 0.3288 - val_accuracy: 0.8862
Epoch 17/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.2062 - accuracy: 0.9265 - val_loss: 0.3215 - val_accuracy: 0.8896
Epoch 18/25
55000/55000 [==============================] - 1s 24us/sample - loss: 0.1973 - accuracy: 0.9299 - val_loss: 0.3284 - val_accuracy: 0.8912
Epoch 19/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.1892 - accuracy: 0.9344 - val_loss: 0.3229 - val_accuracy: 0.8904
Epoch 20/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.1822 - accuracy: 0.9366 - val_loss: 0.3196 - val_accuracy: 0.8902
Epoch 21/25
55000/55000 [==============================] - 1s 24us/sample - loss: 0.1758 - accuracy: 0.9388 - val_loss: 0.3184 - val_accuracy: 0.8940
Epoch 22/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.1699 - accuracy: 0.9422 - val_loss: 0.3221 - val_accuracy: 0.8912
Epoch 23/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.1657 - accuracy: 0.9444 - val_loss: 0.3173 - val_accuracy: 0.8944
Epoch 24/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.1630 - accuracy: 0.9457 - val_loss: 0.3162 - val_accuracy: 0.8946
Epoch 25/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.1610 - accuracy: 0.9464 - val_loss: 0.3169 - val_accuracy: 0.8942
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 7s 129us/sample - loss: 1.6597 - accuracy: 0.8128 - val_loss: 0.7630 - val_accuracy: 0.8080
Epoch 2/2
55000/55000 [==============================] - 7s 124us/sample - loss: 0.7176 - accuracy: 0.8271 - val_loss: 0.6848 - val_accuracy: 0.8360
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 145us/sample - loss: 0.5741 - accuracy: 0.8030 - val_loss: 0.3841 - val_accuracy: 0.8572
Epoch 2/2
55000/55000 [==============================] - 7s 134us/sample - loss: 0.4218 - accuracy: 0.8469 - val_loss: 0.3534 - val_accuracy: 0.8728
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
_____no_output_____
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 147us/sample - loss: 0.4745 - accuracy: 0.8329 - val_loss: 0.3988 - val_accuracy: 0.8584
Epoch 2/2
55000/55000 [==============================] - 7s 135us/sample - loss: 0.3554 - accuracy: 0.8688 - val_loss: 0.3681 - val_accuracy: 0.8726
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 50us/sample - loss: 1.2806 - accuracy: 0.6250 - val_loss: 0.8883 - val_accuracy: 0.7152
Epoch 2/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.7954 - accuracy: 0.7373 - val_loss: 0.7135 - val_accuracy: 0.7648
Epoch 3/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.6816 - accuracy: 0.7727 - val_loss: 0.6356 - val_accuracy: 0.7882
Epoch 4/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.6215 - accuracy: 0.7935 - val_loss: 0.5922 - val_accuracy: 0.8012
Epoch 5/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5830 - accuracy: 0.8081 - val_loss: 0.5596 - val_accuracy: 0.8172
Epoch 6/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5553 - accuracy: 0.8155 - val_loss: 0.5338 - val_accuracy: 0.8240
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5340 - accuracy: 0.8221 - val_loss: 0.5157 - val_accuracy: 0.8310
Epoch 8/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5172 - accuracy: 0.8265 - val_loss: 0.5035 - val_accuracy: 0.8336
Epoch 9/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5036 - accuracy: 0.8299 - val_loss: 0.4950 - val_accuracy: 0.8354
Epoch 10/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.4922 - accuracy: 0.8324 - val_loss: 0.4797 - val_accuracy: 0.8430
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 61us/sample - loss: 1.3460 - accuracy: 0.6233 - val_loss: 0.9251 - val_accuracy: 0.7208
Epoch 2/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.8208 - accuracy: 0.7359 - val_loss: 0.7318 - val_accuracy: 0.7626
Epoch 3/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.6974 - accuracy: 0.7695 - val_loss: 0.6500 - val_accuracy: 0.7886
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.6338 - accuracy: 0.7904 - val_loss: 0.6000 - val_accuracy: 0.8070
Epoch 5/10
55000/55000 [==============================] - 3s 57us/sample - loss: 0.5920 - accuracy: 0.8045 - val_loss: 0.5662 - val_accuracy: 0.8172
Epoch 6/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5620 - accuracy: 0.8138 - val_loss: 0.5416 - val_accuracy: 0.8230
Epoch 7/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5393 - accuracy: 0.8203 - val_loss: 0.5218 - val_accuracy: 0.8302
Epoch 8/10
55000/55000 [==============================] - 3s 57us/sample - loss: 0.5216 - accuracy: 0.8248 - val_loss: 0.5051 - val_accuracy: 0.8340
Epoch 9/10
55000/55000 [==============================] - 3s 59us/sample - loss: 0.5069 - accuracy: 0.8289 - val_loss: 0.4923 - val_accuracy: 0.8384
Epoch 10/10
55000/55000 [==============================] - 3s 62us/sample - loss: 0.4948 - accuracy: 0.8322 - val_loss: 0.4847 - val_accuracy: 0.8372
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial: just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 35s 644us/sample - loss: 1.0197 - accuracy: 0.6154 - val_loss: 0.7386 - val_accuracy: 0.7348
Epoch 2/5
55000/55000 [==============================] - 33s 607us/sample - loss: 0.7149 - accuracy: 0.7401 - val_loss: 0.6187 - val_accuracy: 0.7774
Epoch 3/5
55000/55000 [==============================] - 32s 583us/sample - loss: 0.6193 - accuracy: 0.7803 - val_loss: 0.5926 - val_accuracy: 0.8036
Epoch 4/5
55000/55000 [==============================] - 32s 586us/sample - loss: 0.5555 - accuracy: 0.8043 - val_loss: 0.5208 - val_accuracy: 0.8262
Epoch 5/5
55000/55000 [==============================] - 32s 573us/sample - loss: 0.5159 - accuracy: 0.8238 - val_loss: 0.4790 - val_accuracy: 0.8358
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 18s 319us/sample - loss: 1.9174 - accuracy: 0.2242 - val_loss: 1.3856 - val_accuracy: 0.3846
Epoch 2/5
55000/55000 [==============================] - 15s 279us/sample - loss: 1.2147 - accuracy: 0.4750 - val_loss: 1.0691 - val_accuracy: 0.5510
Epoch 3/5
55000/55000 [==============================] - 15s 281us/sample - loss: 0.9576 - accuracy: 0.6025 - val_loss: 0.7688 - val_accuracy: 0.7036
Epoch 4/5
55000/55000 [==============================] - 15s 281us/sample - loss: 0.8116 - accuracy: 0.6762 - val_loss: 0.7276 - val_accuracy: 0.7288
Epoch 5/5
55000/55000 [==============================] - 15s 278us/sample - loss: 0.8167 - accuracy: 0.6862 - val_loss: 0.7697 - val_accuracy: 0.7032
###Markdown
Not great at all: we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 5s 85us/sample - loss: 0.8756 - accuracy: 0.7140 - val_loss: 0.5514 - val_accuracy: 0.8212
Epoch 2/10
55000/55000 [==============================] - 4s 74us/sample - loss: 0.5765 - accuracy: 0.8033 - val_loss: 0.4742 - val_accuracy: 0.8436
Epoch 3/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.5146 - accuracy: 0.8216 - val_loss: 0.4382 - val_accuracy: 0.8530
Epoch 4/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4821 - accuracy: 0.8322 - val_loss: 0.4170 - val_accuracy: 0.8604
Epoch 5/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4589 - accuracy: 0.8402 - val_loss: 0.4003 - val_accuracy: 0.8658
Epoch 6/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4428 - accuracy: 0.8459 - val_loss: 0.3883 - val_accuracy: 0.8698
Epoch 7/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4220 - accuracy: 0.8521 - val_loss: 0.3792 - val_accuracy: 0.8720
Epoch 8/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4150 - accuracy: 0.8546 - val_loss: 0.3696 - val_accuracy: 0.8754
Epoch 9/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4013 - accuracy: 0.8589 - val_loss: 0.3629 - val_accuracy: 0.8746
Epoch 10/10
55000/55000 [==============================] - 4s 74us/sample - loss: 0.3931 - accuracy: 0.8615 - val_loss: 0.3581 - val_accuracy: 0.8766
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer adds an offset parameter of its own; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.Activation("relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 5s 89us/sample - loss: 0.8617 - accuracy: 0.7095 - val_loss: 0.5649 - val_accuracy: 0.8102
Epoch 2/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.5803 - accuracy: 0.8015 - val_loss: 0.4833 - val_accuracy: 0.8344
Epoch 3/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.5153 - accuracy: 0.8208 - val_loss: 0.4463 - val_accuracy: 0.8462
Epoch 4/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.4846 - accuracy: 0.8307 - val_loss: 0.4256 - val_accuracy: 0.8530
Epoch 5/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.4576 - accuracy: 0.8402 - val_loss: 0.4106 - val_accuracy: 0.8590
Epoch 6/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4401 - accuracy: 0.8467 - val_loss: 0.3973 - val_accuracy: 0.8610
Epoch 7/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4296 - accuracy: 0.8482 - val_loss: 0.3899 - val_accuracy: 0.8650
Epoch 8/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.4127 - accuracy: 0.8559 - val_loss: 0.3818 - val_accuracy: 0.8658
Epoch 9/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4007 - accuracy: 0.8588 - val_loss: 0.3741 - val_accuracy: 0.8682
Epoch 10/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.3929 - accuracy: 0.8621 - val_loss: 0.3694 - val_accuracy: 0.8734
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:
* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).
* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.

The validation set and the test set are also split this way, but without restricting the number of images.

We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
model_B_on_A = keras.models.Sequential(model_A_clone.layers[:-1])  # build on a clone so training B does not also modify model_A
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Train on 200 samples, validate on 986 samples
Epoch 1/4
200/200 [==============================] - 0s 2ms/sample - loss: 0.5851 - accuracy: 0.6600 - val_loss: 0.5855 - val_accuracy: 0.6318
Epoch 2/4
200/200 [==============================] - 0s 303us/sample - loss: 0.5484 - accuracy: 0.6850 - val_loss: 0.5484 - val_accuracy: 0.6775
Epoch 3/4
200/200 [==============================] - 0s 294us/sample - loss: 0.5116 - accuracy: 0.7250 - val_loss: 0.5141 - val_accuracy: 0.7160
Epoch 4/4
200/200 [==============================] - 0s 316us/sample - loss: 0.4779 - accuracy: 0.7450 - val_loss: 0.4859 - val_accuracy: 0.7363
Train on 200 samples, validate on 986 samples
Epoch 1/16
200/200 [==============================] - 0s 2ms/sample - loss: 0.3989 - accuracy: 0.8050 - val_loss: 0.3419 - val_accuracy: 0.8702
Epoch 2/16
200/200 [==============================] - 0s 328us/sample - loss: 0.2795 - accuracy: 0.9300 - val_loss: 0.2624 - val_accuracy: 0.9280
Epoch 3/16
200/200 [==============================] - 0s 319us/sample - loss: 0.2128 - accuracy: 0.9650 - val_loss: 0.2150 - val_accuracy: 0.9544
Epoch 4/16
200/200 [==============================] - 0s 318us/sample - loss: 0.1720 - accuracy: 0.9800 - val_loss: 0.1826 - val_accuracy: 0.9635
Epoch 5/16
200/200 [==============================] - 0s 317us/sample - loss: 0.1436 - accuracy: 0.9800 - val_loss: 0.1586 - val_accuracy: 0.9736
Epoch 6/16
200/200 [==============================] - 0s 317us/sample - loss: 0.1231 - accuracy: 0.9850 - val_loss: 0.1407 - val_accuracy: 0.9807
Epoch 7/16
200/200 [==============================] - 0s 325us/sample - loss: 0.1074 - accuracy: 0.9900 - val_loss: 0.1270 - val_accuracy: 0.9828
Epoch 8/16
200/200 [==============================] - 0s 326us/sample - loss: 0.0953 - accuracy: 0.9950 - val_loss: 0.1158 - val_accuracy: 0.9848
Epoch 9/16
200/200 [==============================] - 0s 319us/sample - loss: 0.0854 - accuracy: 1.0000 - val_loss: 0.1076 - val_accuracy: 0.9878
Epoch 10/16
200/200 [==============================] - 0s 322us/sample - loss: 0.0781 - accuracy: 1.0000 - val_loss: 0.1007 - val_accuracy: 0.9888
Epoch 11/16
200/200 [==============================] - 0s 316us/sample - loss: 0.0718 - accuracy: 1.0000 - val_loss: 0.0944 - val_accuracy: 0.9888
Epoch 12/16
200/200 [==============================] - 0s 319us/sample - loss: 0.0662 - accuracy: 1.0000 - val_loss: 0.0891 - val_accuracy: 0.9899
Epoch 13/16
200/200 [==============================] - 0s 318us/sample - loss: 0.0613 - accuracy: 1.0000 - val_loss: 0.0846 - val_accuracy: 0.9899
Epoch 14/16
200/200 [==============================] - 0s 332us/sample - loss: 0.0574 - accuracy: 1.0000 - val_loss: 0.0806 - val_accuracy: 0.9899
Epoch 15/16
200/200 [==============================] - 0s 320us/sample - loss: 0.0538 - accuracy: 1.0000 - val_loss: 0.0770 - val_accuracy: 0.9899
Epoch 16/16
200/200 [==============================] - 0s 320us/sample - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.0740 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
2000/2000 [==============================] - 0s 38us/sample - loss: 0.0689 - accuracy: 0.9925
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of almost 4!
###Code
# ratio of the error rates: model_B reached about 97.05% test accuracy vs. 99.25% for model_B_on_A
(100 - 97.05) / (100 - 99.25)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```
* Keras uses `c=1` and `s = 1 / decay`
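For example, with the values used in the next cell (`lr0 = 0.01`, `decay = 1e-4`, hence `s = 10,000`), the learning rate after 10,000 training steps is 0.01 / (1 + 10,000 / 10,000) = 0.005, half the initial rate.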
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
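With the values used below (`lr0 = 0.01`, `s = 20`), the learning rate is divided by 10 every 20 epochs: 0.01 at epoch 0, 0.001 at epoch 20, 0.0001 at epoch 40, and so on.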
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
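# Sketch of how this two-argument variant would be wired up (same LearningRateScheduler
# callback as before). Since it multiplies the optimizer's *current* learning rate by
# 0.1**(1/20) each epoch, the schedule implicitly depends on the optimizer's initial rate.
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)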
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4887 - accuracy: 0.8282 - val_loss: 0.4245 - val_accuracy: 0.8526
Epoch 2/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.3830 - accuracy: 0.8641 - val_loss: 0.3798 - val_accuracy: 0.8688
Epoch 3/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.3491 - accuracy: 0.8758 - val_loss: 0.3650 - val_accuracy: 0.8730
Epoch 4/25
55000/55000 [==============================] - 4s 78us/sample - loss: 0.3267 - accuracy: 0.8839 - val_loss: 0.3564 - val_accuracy: 0.8746
Epoch 5/25
55000/55000 [==============================] - 4s 72us/sample - loss: 0.3102 - accuracy: 0.8893 - val_loss: 0.3493 - val_accuracy: 0.8770
Epoch 6/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2969 - accuracy: 0.8939 - val_loss: 0.3400 - val_accuracy: 0.8818
Epoch 7/25
55000/55000 [==============================] - 4s 77us/sample - loss: 0.2855 - accuracy: 0.8983 - val_loss: 0.3385 - val_accuracy: 0.8830
Epoch 8/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2764 - accuracy: 0.9025 - val_loss: 0.3372 - val_accuracy: 0.8824
Epoch 9/25
55000/55000 [==============================] - 4s 67us/sample - loss: 0.2684 - accuracy: 0.9039 - val_loss: 0.3337 - val_accuracy: 0.8848
Epoch 10/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2613 - accuracy: 0.9072 - val_loss: 0.3277 - val_accuracy: 0.8862
Epoch 11/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2555 - accuracy: 0.9086 - val_loss: 0.3273 - val_accuracy: 0.8860
Epoch 12/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2500 - accuracy: 0.9111 - val_loss: 0.3244 - val_accuracy: 0.8840
Epoch 13/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2454 - accuracy: 0.9124 - val_loss: 0.3194 - val_accuracy: 0.8904
Epoch 14/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2414 - accuracy: 0.9141 - val_loss: 0.3226 - val_accuracy: 0.8884
Epoch 15/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2378 - accuracy: 0.9160 - val_loss: 0.3233 - val_accuracy: 0.8860
Epoch 16/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2347 - accuracy: 0.9174 - val_loss: 0.3207 - val_accuracy: 0.8904
Epoch 17/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2318 - accuracy: 0.9179 - val_loss: 0.3195 - val_accuracy: 0.8892
Epoch 18/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2293 - accuracy: 0.9193 - val_loss: 0.3184 - val_accuracy: 0.8916
Epoch 19/25
55000/55000 [==============================] - 4s 67us/sample - loss: 0.2272 - accuracy: 0.9201 - val_loss: 0.3196 - val_accuracy: 0.8886
Epoch 20/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2253 - accuracy: 0.9206 - val_loss: 0.3190 - val_accuracy: 0.8918
Epoch 21/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2235 - accuracy: 0.9214 - val_loss: 0.3176 - val_accuracy: 0.8912
Epoch 22/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2220 - accuracy: 0.9220 - val_loss: 0.3181 - val_accuracy: 0.8900
Epoch 23/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2206 - accuracy: 0.9226 - val_loss: 0.3187 - val_accuracy: 0.8894
Epoch 24/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2193 - accuracy: 0.9231 - val_loss: 0.3168 - val_accuracy: 0.8908
Epoch 25/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2181 - accuracy: 0.9234 - val_loss: 0.3171 - val_accuracy: 0.8898
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
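# Hedged sketch: as with ExponentialDecay above, the schedule object is passed directly as
# the optimizer's learning rate (`n_steps_per_epoch` comes from the Power Scheduling cell).
optimizer = keras.optimizers.SGD(learning_rate)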
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.6569 - accuracy: 0.7750 - val_loss: 0.4875 - val_accuracy: 0.8300
Epoch 2/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.4584 - accuracy: 0.8391 - val_loss: 0.4390 - val_accuracy: 0.8476
Epoch 3/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.4124 - accuracy: 0.8541 - val_loss: 0.4102 - val_accuracy: 0.8570
Epoch 4/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.3842 - accuracy: 0.8643 - val_loss: 0.3893 - val_accuracy: 0.8652
Epoch 5/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3641 - accuracy: 0.8707 - val_loss: 0.3736 - val_accuracy: 0.8678
Epoch 6/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.3456 - accuracy: 0.8781 - val_loss: 0.3652 - val_accuracy: 0.8726
Epoch 7/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.3318 - accuracy: 0.8818 - val_loss: 0.3596 - val_accuracy: 0.8768
Epoch 8/25
55000/55000 [==============================] - 1s 24us/sample - loss: 0.3180 - accuracy: 0.8862 - val_loss: 0.3845 - val_accuracy: 0.8602
Epoch 9/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.3062 - accuracy: 0.8893 - val_loss: 0.3824 - val_accuracy: 0.8660
Epoch 10/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.2938 - accuracy: 0.8934 - val_loss: 0.3516 - val_accuracy: 0.8742
Epoch 11/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.2838 - accuracy: 0.8975 - val_loss: 0.3609 - val_accuracy: 0.8740
Epoch 12/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.2716 - accuracy: 0.9025 - val_loss: 0.3843 - val_accuracy: 0.8666
Epoch 13/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.2541 - accuracy: 0.9091 - val_loss: 0.3282 - val_accuracy: 0.8844
Epoch 14/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.2390 - accuracy: 0.9139 - val_loss: 0.3336 - val_accuracy: 0.8838
Epoch 15/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.2273 - accuracy: 0.9177 - val_loss: 0.3283 - val_accuracy: 0.8884
Epoch 16/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.2156 - accuracy: 0.9234 - val_loss: 0.3288 - val_accuracy: 0.8862
Epoch 17/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.2062 - accuracy: 0.9265 - val_loss: 0.3215 - val_accuracy: 0.8896
Epoch 18/25
55000/55000 [==============================] - 1s 24us/sample - loss: 0.1973 - accuracy: 0.9299 - val_loss: 0.3284 - val_accuracy: 0.8912
Epoch 19/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.1892 - accuracy: 0.9344 - val_loss: 0.3229 - val_accuracy: 0.8904
Epoch 20/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.1822 - accuracy: 0.9366 - val_loss: 0.3196 - val_accuracy: 0.8902
Epoch 21/25
55000/55000 [==============================] - 1s 24us/sample - loss: 0.1758 - accuracy: 0.9388 - val_loss: 0.3184 - val_accuracy: 0.8940
Epoch 22/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.1699 - accuracy: 0.9422 - val_loss: 0.3221 - val_accuracy: 0.8912
Epoch 23/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.1657 - accuracy: 0.9444 - val_loss: 0.3173 - val_accuracy: 0.8944
Epoch 24/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.1630 - accuracy: 0.9457 - val_loss: 0.3162 - val_accuracy: 0.8946
Epoch 25/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.1610 - accuracy: 0.9464 - val_loss: 0.3169 - val_accuracy: 0.8942
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 7s 129us/sample - loss: 1.6597 - accuracy: 0.8128 - val_loss: 0.7630 - val_accuracy: 0.8080
Epoch 2/2
55000/55000 [==============================] - 7s 124us/sample - loss: 0.7176 - accuracy: 0.8271 - val_loss: 0.6848 - val_accuracy: 0.8360
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 145us/sample - loss: 0.5741 - accuracy: 0.8030 - val_loss: 0.3841 - val_accuracy: 0.8572
Epoch 2/2
55000/55000 [==============================] - 7s 134us/sample - loss: 0.4218 - accuracy: 0.8469 - val_loss: 0.3534 - val_accuracy: 0.8728
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
_____no_output_____
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 147us/sample - loss: 0.4745 - accuracy: 0.8329 - val_loss: 0.3988 - val_accuracy: 0.8584
Epoch 2/2
55000/55000 [==============================] - 7s 135us/sample - loss: 0.3554 - accuracy: 0.8688 - val_loss: 0.3681 - val_accuracy: 0.8726
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6314 - accuracy: 0.5054 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.8416 - accuracy: 0.7247 - val_loss: 0.7130 - val_accuracy: 0.7656
Epoch 3/10
1719/1719 [==============================] - 2s 879us/step - loss: 0.7053 - accuracy: 0.7637 - val_loss: 0.6427 - val_accuracy: 0.7898
Epoch 4/10
1719/1719 [==============================] - 2s 883us/step - loss: 0.6325 - accuracy: 0.7908 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 2s 887us/step - loss: 0.5992 - accuracy: 0.8021 - val_loss: 0.5582 - val_accuracy: 0.8200
Epoch 6/10
1719/1719 [==============================] - 2s 881us/step - loss: 0.5624 - accuracy: 0.8142 - val_loss: 0.5350 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.5379 - accuracy: 0.8217 - val_loss: 0.5157 - val_accuracy: 0.8304
Epoch 8/10
1719/1719 [==============================] - 2s 895us/step - loss: 0.5152 - accuracy: 0.8295 - val_loss: 0.5078 - val_accuracy: 0.8284
Epoch 9/10
1719/1719 [==============================] - 2s 911us/step - loss: 0.5100 - accuracy: 0.8268 - val_loss: 0.4895 - val_accuracy: 0.8390
Epoch 10/10
1719/1719 [==============================] - 2s 897us/step - loss: 0.4918 - accuracy: 0.8340 - val_loss: 0.4817 - val_accuracy: 0.8396
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6969 - accuracy: 0.4974 - val_loss: 0.9255 - val_accuracy: 0.7186
Epoch 2/10
1719/1719 [==============================] - 2s 990us/step - loss: 0.8706 - accuracy: 0.7247 - val_loss: 0.7305 - val_accuracy: 0.7630
Epoch 3/10
1719/1719 [==============================] - 2s 980us/step - loss: 0.7211 - accuracy: 0.7621 - val_loss: 0.6564 - val_accuracy: 0.7882
Epoch 4/10
1719/1719 [==============================] - 2s 985us/step - loss: 0.6447 - accuracy: 0.7879 - val_loss: 0.6003 - val_accuracy: 0.8048
Epoch 5/10
1719/1719 [==============================] - 2s 967us/step - loss: 0.6077 - accuracy: 0.8004 - val_loss: 0.5656 - val_accuracy: 0.8182
Epoch 6/10
1719/1719 [==============================] - 2s 984us/step - loss: 0.5692 - accuracy: 0.8118 - val_loss: 0.5406 - val_accuracy: 0.8236
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5428 - accuracy: 0.8194 - val_loss: 0.5196 - val_accuracy: 0.8314
Epoch 8/10
1719/1719 [==============================] - 2s 983us/step - loss: 0.5193 - accuracy: 0.8284 - val_loss: 0.5113 - val_accuracy: 0.8316
Epoch 9/10
1719/1719 [==============================] - 2s 992us/step - loss: 0.5128 - accuracy: 0.8272 - val_loss: 0.4916 - val_accuracy: 0.8378
Epoch 10/10
1719/1719 [==============================] - 2s 988us/step - loss: 0.4941 - accuracy: 0.8314 - val_loss: 0.4826 - val_accuracy: 0.8398
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 12s 6ms/step - loss: 1.3556 - accuracy: 0.4808 - val_loss: 0.7711 - val_accuracy: 0.6858
Epoch 2/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7537 - accuracy: 0.7235 - val_loss: 0.7534 - val_accuracy: 0.7384
Epoch 3/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7451 - accuracy: 0.7357 - val_loss: 0.5943 - val_accuracy: 0.7834
Epoch 4/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5699 - accuracy: 0.7906 - val_loss: 0.5434 - val_accuracy: 0.8066
Epoch 5/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5569 - accuracy: 0.8051 - val_loss: 0.4907 - val_accuracy: 0.8218
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 11s 5ms/step - loss: 2.0460 - accuracy: 0.1919 - val_loss: 1.5971 - val_accuracy: 0.3048
Epoch 2/5
1719/1719 [==============================] - 8s 5ms/step - loss: 1.2654 - accuracy: 0.4591 - val_loss: 0.9156 - val_accuracy: 0.6372
Epoch 3/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.9312 - accuracy: 0.6169 - val_loss: 0.8928 - val_accuracy: 0.6246
Epoch 4/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.8188 - accuracy: 0.6710 - val_loss: 0.6914 - val_accuracy: 0.7396
Epoch 5/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.7288 - accuracy: 0.7152 - val_loss: 0.6638 - val_accuracy: 0.7380
###Markdown
Not great at all: we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
#bn1.updates #deprecated
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.2287 - accuracy: 0.5993 - val_loss: 0.5526 - val_accuracy: 0.8230
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5996 - accuracy: 0.7959 - val_loss: 0.4725 - val_accuracy: 0.8468
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5312 - accuracy: 0.8168 - val_loss: 0.4375 - val_accuracy: 0.8558
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4884 - accuracy: 0.8294 - val_loss: 0.4153 - val_accuracy: 0.8596
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4717 - accuracy: 0.8343 - val_loss: 0.3997 - val_accuracy: 0.8640
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4420 - accuracy: 0.8461 - val_loss: 0.3867 - val_accuracy: 0.8694
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4285 - accuracy: 0.8496 - val_loss: 0.3763 - val_accuracy: 0.8710
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4086 - accuracy: 0.8552 - val_loss: 0.3711 - val_accuracy: 0.8740
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4079 - accuracy: 0.8566 - val_loss: 0.3631 - val_accuracy: 0.8752
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3903 - accuracy: 0.8617 - val_loss: 0.3573 - val_accuracy: 0.8750
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has some as well; they would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.3677 - accuracy: 0.5604 - val_loss: 0.6767 - val_accuracy: 0.7812
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.7136 - accuracy: 0.7702 - val_loss: 0.5566 - val_accuracy: 0.8184
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.6123 - accuracy: 0.7990 - val_loss: 0.5007 - val_accuracy: 0.8360
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5547 - accuracy: 0.8148 - val_loss: 0.4666 - val_accuracy: 0.8448
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5255 - accuracy: 0.8230 - val_loss: 0.4434 - val_accuracy: 0.8534
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4947 - accuracy: 0.8328 - val_loss: 0.4263 - val_accuracy: 0.8550
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4736 - accuracy: 0.8385 - val_loss: 0.4130 - val_accuracy: 0.8566
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4550 - accuracy: 0.8446 - val_loss: 0.4035 - val_accuracy: 0.8612
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4495 - accuracy: 0.8440 - val_loss: 0.3943 - val_accuracy: 0.8638
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4333 - accuracy: 0.8494 - val_loss: 0.3875 - val_accuracy: 0.8660
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments: `clipvalue` clips each component of the gradient to the given range, while `clipnorm` rescales the whole gradient whenever its ℓ2 norm exceeds the threshold, preserving its direction:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:
* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).
* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.

The validation set and the test set are also split this way, but without restricting the number of images.

We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
###Output
_____no_output_____
###Markdown
Note that `model_B_on_A` and `model_A` actually share layers now, so when we train one, it will update both models. If we want to avoid that, we need to build `model_B_on_A` on top of a *clone* of `model_A`:
###Code
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
model_B_on_A = keras.models.Sequential(model_A_clone.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 0s 29ms/step - loss: 0.2575 - accuracy: 0.9487 - val_loss: 0.2797 - val_accuracy: 0.9270
Epoch 2/4
7/7 [==============================] - 0s 9ms/step - loss: 0.2566 - accuracy: 0.9371 - val_loss: 0.2701 - val_accuracy: 0.9300
Epoch 3/4
7/7 [==============================] - 0s 9ms/step - loss: 0.2473 - accuracy: 0.9332 - val_loss: 0.2613 - val_accuracy: 0.9341
Epoch 4/4
7/7 [==============================] - 0s 10ms/step - loss: 0.2450 - accuracy: 0.9463 - val_loss: 0.2531 - val_accuracy: 0.9391
Epoch 1/16
7/7 [==============================] - 1s 29ms/step - loss: 0.2106 - accuracy: 0.9524 - val_loss: 0.2045 - val_accuracy: 0.9615
Epoch 2/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1738 - accuracy: 0.9526 - val_loss: 0.1719 - val_accuracy: 0.9706
Epoch 3/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1451 - accuracy: 0.9660 - val_loss: 0.1491 - val_accuracy: 0.9807
Epoch 4/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1242 - accuracy: 0.9717 - val_loss: 0.1325 - val_accuracy: 0.9817
Epoch 5/16
7/7 [==============================] - 0s 11ms/step - loss: 0.1078 - accuracy: 0.9855 - val_loss: 0.1200 - val_accuracy: 0.9848
Epoch 6/16
7/7 [==============================] - 0s 11ms/step - loss: 0.1075 - accuracy: 0.9931 - val_loss: 0.1101 - val_accuracy: 0.9858
Epoch 7/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0893 - accuracy: 0.9950 - val_loss: 0.1020 - val_accuracy: 0.9858
Epoch 8/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0815 - accuracy: 0.9950 - val_loss: 0.0953 - val_accuracy: 0.9868
Epoch 9/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0640 - accuracy: 0.9973 - val_loss: 0.0892 - val_accuracy: 0.9868
Epoch 10/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0641 - accuracy: 0.9931 - val_loss: 0.0844 - val_accuracy: 0.9878
Epoch 11/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0609 - accuracy: 0.9931 - val_loss: 0.0800 - val_accuracy: 0.9888
Epoch 12/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0641 - accuracy: 1.0000 - val_loss: 0.0762 - val_accuracy: 0.9888
Epoch 13/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0478 - accuracy: 1.0000 - val_loss: 0.0728 - val_accuracy: 0.9888
Epoch 14/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0444 - accuracy: 1.0000 - val_loss: 0.0700 - val_accuracy: 0.9878
Epoch 15/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0490 - accuracy: 1.0000 - val_loss: 0.0675 - val_accuracy: 0.9878
Epoch 16/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0434 - accuracy: 1.0000 - val_loss: 0.0652 - val_accuracy: 0.9878
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 751us/step - loss: 0.0562 - accuracy: 0.9940
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4.9!
###Code
(100 - 97.05) / (100 - 99.40)  # model_B's test error rate divided by model_B_on_A's: roughly 4.9
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9)
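# Any of these optimizer objects is then passed to model.compile(), e.g.:
# model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
#               metrics=["accuracy"])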
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(learning_rate=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
import math
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = math.ceil(len(X_train) / batch_size)
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
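# This reproduces lr = lr0 / (1 + steps / s)**c with c = 1 and s = 1 / decay,
# where steps = epochs * n_steps_per_epoch.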
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
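# A two-argument schedule function is registered just like the one-argument
# version, e.g. keras.callbacks.LearningRateScheduler(exponential_decay_fn);
# Keras then passes the optimizer's current learning rate in as `lr`.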
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.learning_rate)
        K.set_value(self.model.optimizer.learning_rate, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.learning_rate)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(learning_rate=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
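        # boundaries > epoch is a boolean array; np.argmax returns the index of the
        # first True (or 0 if there is none), so subtracting 1 picks the value for
        # the interval containing `epoch` (and the last value past the last boundary)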
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5995 - accuracy: 0.7923 - val_loss: 0.4095 - val_accuracy: 0.8606
Epoch 2/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3890 - accuracy: 0.8613 - val_loss: 0.3738 - val_accuracy: 0.8692
Epoch 3/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3530 - accuracy: 0.8772 - val_loss: 0.3735 - val_accuracy: 0.8692
Epoch 4/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3296 - accuracy: 0.8813 - val_loss: 0.3494 - val_accuracy: 0.8798
Epoch 5/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3178 - accuracy: 0.8867 - val_loss: 0.3430 - val_accuracy: 0.8794
Epoch 6/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2930 - accuracy: 0.8951 - val_loss: 0.3414 - val_accuracy: 0.8826
Epoch 7/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2854 - accuracy: 0.8985 - val_loss: 0.3354 - val_accuracy: 0.8810
Epoch 8/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9039 - val_loss: 0.3364 - val_accuracy: 0.8824
Epoch 9/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9047 - val_loss: 0.3265 - val_accuracy: 0.8846
Epoch 10/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2570 - accuracy: 0.9084 - val_loss: 0.3238 - val_accuracy: 0.8854
Epoch 11/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2502 - accuracy: 0.9117 - val_loss: 0.3250 - val_accuracy: 0.8862
Epoch 12/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2453 - accuracy: 0.9145 - val_loss: 0.3299 - val_accuracy: 0.8830
Epoch 13/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2408 - accuracy: 0.9154 - val_loss: 0.3219 - val_accuracy: 0.8870
Epoch 14/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2380 - accuracy: 0.9154 - val_loss: 0.3221 - val_accuracy: 0.8860
Epoch 15/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2378 - accuracy: 0.9166 - val_loss: 0.3208 - val_accuracy: 0.8864
Epoch 16/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2318 - accuracy: 0.9191 - val_loss: 0.3184 - val_accuracy: 0.8892
Epoch 17/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2266 - accuracy: 0.9212 - val_loss: 0.3197 - val_accuracy: 0.8906
Epoch 18/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2284 - accuracy: 0.9185 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 19/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2286 - accuracy: 0.9205 - val_loss: 0.3197 - val_accuracy: 0.8884
Epoch 20/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2288 - accuracy: 0.9211 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 21/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2265 - accuracy: 0.9212 - val_loss: 0.3179 - val_accuracy: 0.8904
Epoch 22/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2258 - accuracy: 0.9205 - val_loss: 0.3163 - val_accuracy: 0.8914
Epoch 23/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9226 - val_loss: 0.3170 - val_accuracy: 0.8904
Epoch 24/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2182 - accuracy: 0.9244 - val_loss: 0.3165 - val_accuracy: 0.8898
Epoch 25/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9229 - val_loss: 0.3164 - val_accuracy: 0.8904
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
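# Like ExponentialDecay above, this schedule object would then be passed to an
# optimizer, e.g. optimizer = keras.optimizers.SGD(learning_rate)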
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.learning_rate))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.learning_rate, self.model.optimizer.learning_rate * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = math.ceil(len(X) / batch_size) * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.learning_rate)
K.set_value(model.optimizer.learning_rate, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.learning_rate, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
###Output
_____no_output_____
###Markdown
**Warning**: In the `on_batch_end()` method, `logs["loss"]` used to contain the batch loss, but in TensorFlow 2.2.0 it was replaced with the mean loss (since the start of the epoch). This explains why the graph below is much smoother than in the book (if you are using TF 2.2 or above). It also means that there is a lag between the moment the batch loss starts exploding and the moment the explosion becomes clear in the graph. So you should choose a slightly smaller learning rate than you would have chosen with the "noisy" graph. Alternatively, you can tweak the `ExponentialLearningRate` callback above so it computes the batch loss (based on the current mean loss and the previous mean loss):```pythonclass ExponentialLearningRate(keras.callbacks.Callback): def __init__(self, factor): self.factor = factor self.rates = [] self.losses = [] def on_epoch_begin(self, epoch, logs=None): self.prev_loss = 0 def on_batch_end(self, batch, logs=None): batch_loss = logs["loss"] * (batch + 1) - self.prev_loss * batch self.prev_loss = logs["loss"] self.rates.append(K.get_value(self.model.optimizer.learning_rate)) self.losses.append(batch_loss) K.set_value(self.model.optimizer.learning_rate, self.model.optimizer.learning_rate * self.factor)```
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
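    # The schedule: rise linearly from start_rate to max_rate over the first
    # half_iteration batches, fall back to start_rate over the next half_iteration
    # batches, then decay linearly to last_rate over the final last_iterations batches.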
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.learning_rate, rate)
n_epochs = 25
onecycle = OneCycleScheduler(math.ceil(len(X_train) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 1s 2ms/step - loss: 0.6572 - accuracy: 0.7740 - val_loss: 0.4872 - val_accuracy: 0.8338
Epoch 2/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4580 - accuracy: 0.8397 - val_loss: 0.4274 - val_accuracy: 0.8520
Epoch 3/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4121 - accuracy: 0.8545 - val_loss: 0.4116 - val_accuracy: 0.8588
Epoch 4/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3837 - accuracy: 0.8642 - val_loss: 0.3868 - val_accuracy: 0.8688
Epoch 5/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3639 - accuracy: 0.8719 - val_loss: 0.3766 - val_accuracy: 0.8688
Epoch 6/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3456 - accuracy: 0.8775 - val_loss: 0.3739 - val_accuracy: 0.8706
Epoch 7/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3330 - accuracy: 0.8811 - val_loss: 0.3635 - val_accuracy: 0.8708
Epoch 8/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3184 - accuracy: 0.8861 - val_loss: 0.3959 - val_accuracy: 0.8610
Epoch 9/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3065 - accuracy: 0.8890 - val_loss: 0.3475 - val_accuracy: 0.8770
Epoch 10/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2943 - accuracy: 0.8927 - val_loss: 0.3392 - val_accuracy: 0.8806
Epoch 11/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2838 - accuracy: 0.8963 - val_loss: 0.3467 - val_accuracy: 0.8800
Epoch 12/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2707 - accuracy: 0.9024 - val_loss: 0.3646 - val_accuracy: 0.8696
Epoch 13/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2536 - accuracy: 0.9079 - val_loss: 0.3350 - val_accuracy: 0.8842
Epoch 14/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2405 - accuracy: 0.9135 - val_loss: 0.3465 - val_accuracy: 0.8794
Epoch 15/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2279 - accuracy: 0.9185 - val_loss: 0.3257 - val_accuracy: 0.8830
Epoch 16/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2159 - accuracy: 0.9232 - val_loss: 0.3294 - val_accuracy: 0.8824
Epoch 17/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2062 - accuracy: 0.9263 - val_loss: 0.3333 - val_accuracy: 0.8882
Epoch 18/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1978 - accuracy: 0.9301 - val_loss: 0.3235 - val_accuracy: 0.8898
Epoch 19/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1892 - accuracy: 0.9337 - val_loss: 0.3233 - val_accuracy: 0.8906
Epoch 20/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1821 - accuracy: 0.9365 - val_loss: 0.3224 - val_accuracy: 0.8928
Epoch 21/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1752 - accuracy: 0.9400 - val_loss: 0.3220 - val_accuracy: 0.8908
Epoch 22/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1700 - accuracy: 0.9416 - val_loss: 0.3180 - val_accuracy: 0.8962
Epoch 23/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1655 - accuracy: 0.9438 - val_loss: 0.3187 - val_accuracy: 0.8940
Epoch 24/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1627 - accuracy: 0.9454 - val_loss: 0.3177 - val_accuracy: 0.8932
Epoch 25/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1610 - accuracy: 0.9462 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 3.2911 - accuracy: 0.7924 - val_loss: 0.7218 - val_accuracy: 0.8310
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.7282 - accuracy: 0.8245 - val_loss: 0.6826 - val_accuracy: 0.8382
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 0.7611 - accuracy: 0.7576 - val_loss: 0.3730 - val_accuracy: 0.8644
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4306 - accuracy: 0.8401 - val_loss: 0.3395 - val_accuracy: 0.8722
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4225 - accuracy: 0.8432
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
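# training=True keeps the AlphaDropout layers active at prediction time, so each
# of the 100 forward passes samples a different dropout mask.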
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
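# The model above uses AlphaDropout layers, so only MCAlphaDropout is needed below;
# MCDropout is the equivalent wrapper for regular Dropout layers.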
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5763 - accuracy: 0.8020 - val_loss: 0.3674 - val_accuracy: 0.8674
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3545 - accuracy: 0.8709 - val_loss: 0.3714 - val_accuracy: 0.8662
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
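A minimal sketch of that sweep (not part of the original notebook; it assumes the CIFAR10 training/validation split loaded a couple of cells below, and rebuilds the 20-layer DNN from scratch for each candidate rate) could look like this:```pythonfor lr in (1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 1e-3, 3e-3, 1e-2):    keras.backend.clear_session()    sweep_model = keras.models.Sequential()    sweep_model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))    for _ in range(20):        sweep_model.add(keras.layers.Dense(100, activation="elu",                                           kernel_initializer="he_normal"))    sweep_model.add(keras.layers.Dense(10, activation="softmax"))    sweep_model.compile(loss="sparse_categorical_crossentropy",                        optimizer=keras.optimizers.Nadam(learning_rate=lr),                        metrics=["accuracy"])    run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "lr_{:g}".format(lr))    sweep_model.fit(X_train, y_train, epochs=10,                    validation_data=(X_valid, y_valid),                    callbacks=[keras.callbacks.TensorBoard(run_logdir)])```The learning curves can then be compared side by side in TensorBoard.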
###Code
optimizer = keras.optimizers.Nadam(learning_rate=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4960 - accuracy: 0.4762
###Markdown
The model with the lowest validation loss gets about 47.6% accuracy on the validation set. It took 27 epochs to reach the lowest validation loss, with roughly 8 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 19s 9ms/step - loss: 1.9765 - accuracy: 0.2968 - val_loss: 1.6602 - val_accuracy: 0.4042
Epoch 2/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6787 - accuracy: 0.4056 - val_loss: 1.5887 - val_accuracy: 0.4304
Epoch 3/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6097 - accuracy: 0.4274 - val_loss: 1.5781 - val_accuracy: 0.4326
Epoch 4/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5574 - accuracy: 0.4486 - val_loss: 1.5064 - val_accuracy: 0.4676
Epoch 5/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5075 - accuracy: 0.4642 - val_loss: 1.4412 - val_accuracy: 0.4844
Epoch 6/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4664 - accuracy: 0.4787 - val_loss: 1.4179 - val_accuracy: 0.4984
Epoch 7/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4334 - accuracy: 0.4932 - val_loss: 1.4277 - val_accuracy: 0.4906
Epoch 8/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.4054 - accuracy: 0.5038 - val_loss: 1.3843 - val_accuracy: 0.5130
Epoch 9/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3816 - accuracy: 0.5106 - val_loss: 1.3691 - val_accuracy: 0.5108
Epoch 10/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3547 - accuracy: 0.5206 - val_loss: 1.3552 - val_accuracy: 0.5226
Epoch 11/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.3244 - accuracy: 0.5371 - val_loss: 1.3678 - val_accuracy: 0.5142
Epoch 12/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3078 - accuracy: 0.5393 - val_loss: 1.3844 - val_accuracy: 0.5080
Epoch 13/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2889 - accuracy: 0.5431 - val_loss: 1.3566 - val_accuracy: 0.5164
Epoch 14/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2607 - accuracy: 0.5559 - val_loss: 1.3626 - val_accuracy: 0.5248
Epoch 15/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2580 - accuracy: 0.5587 - val_loss: 1.3616 - val_accuracy: 0.5276
Epoch 16/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2441 - accuracy: 0.5586 - val_loss: 1.3350 - val_accuracy: 0.5286
Epoch 17/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2241 - accuracy: 0.5676 - val_loss: 1.3370 - val_accuracy: 0.5408
Epoch 18/100
<<29 more lines>>
Epoch 33/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0336 - accuracy: 0.6369 - val_loss: 1.3682 - val_accuracy: 0.5450
Epoch 34/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.0228 - accuracy: 0.6388 - val_loss: 1.3348 - val_accuracy: 0.5458
Epoch 35/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0205 - accuracy: 0.6407 - val_loss: 1.3490 - val_accuracy: 0.5440
Epoch 36/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.0008 - accuracy: 0.6489 - val_loss: 1.3568 - val_accuracy: 0.5408
Epoch 37/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9785 - accuracy: 0.6543 - val_loss: 1.3628 - val_accuracy: 0.5396
Epoch 38/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9832 - accuracy: 0.6592 - val_loss: 1.3617 - val_accuracy: 0.5482
Epoch 39/100
1407/1407 [==============================] - 12s 8ms/step - loss: 0.9707 - accuracy: 0.6581 - val_loss: 1.3767 - val_accuracy: 0.5446
Epoch 40/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9590 - accuracy: 0.6651 - val_loss: 1.4200 - val_accuracy: 0.5314
Epoch 41/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9548 - accuracy: 0.6668 - val_loss: 1.3692 - val_accuracy: 0.5450
Epoch 42/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9480 - accuracy: 0.6667 - val_loss: 1.3841 - val_accuracy: 0.5310
Epoch 43/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9411 - accuracy: 0.6716 - val_loss: 1.4036 - val_accuracy: 0.5382
Epoch 44/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9383 - accuracy: 0.6708 - val_loss: 1.4114 - val_accuracy: 0.5236
Epoch 45/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9258 - accuracy: 0.6769 - val_loss: 1.4224 - val_accuracy: 0.5324
Epoch 46/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9072 - accuracy: 0.6836 - val_loss: 1.3875 - val_accuracy: 0.5442
Epoch 47/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8996 - accuracy: 0.6850 - val_loss: 1.4449 - val_accuracy: 0.5280
Epoch 48/100
1407/1407 [==============================] - 13s 9ms/step - loss: 0.9050 - accuracy: 0.6835 - val_loss: 1.4167 - val_accuracy: 0.5338
Epoch 49/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8934 - accuracy: 0.6880 - val_loss: 1.4260 - val_accuracy: 0.5294
157/157 [==============================] - 1s 2ms/step - loss: 1.3344 - accuracy: 0.5398
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 27 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 5 epochs and continued to make progress until the 16th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 54.0% accuracy instead of 47.6%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 12s instead of 8s, because of the extra computations required by the BN layers. But overall the training time (wall time) was shortened significantly! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4633 - accuracy: 0.4792
###Markdown
We get 47.9% accuracy, which is not much better than the original model (47.6%), and not as good as the model using batch normalization (54.0%). However, convergence was almost as fast as with the BN model, plus each epoch took only 7 seconds. So it's by far the fastest model to train so far. e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 9s 5ms/step - loss: 2.0583 - accuracy: 0.2742 - val_loss: 1.7429 - val_accuracy: 0.3858
Epoch 2/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.6852 - accuracy: 0.4008 - val_loss: 1.7055 - val_accuracy: 0.3792
Epoch 3/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5963 - accuracy: 0.4413 - val_loss: 1.7401 - val_accuracy: 0.4072
Epoch 4/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5231 - accuracy: 0.4634 - val_loss: 1.5728 - val_accuracy: 0.4584
Epoch 5/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.4619 - accuracy: 0.4887 - val_loss: 1.5448 - val_accuracy: 0.4702
Epoch 6/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.4074 - accuracy: 0.5061 - val_loss: 1.5678 - val_accuracy: 0.4664
Epoch 7/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.3718 - accuracy: 0.5222 - val_loss: 1.5764 - val_accuracy: 0.4824
Epoch 8/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.3220 - accuracy: 0.5387 - val_loss: 1.4805 - val_accuracy: 0.4890
Epoch 9/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2908 - accuracy: 0.5487 - val_loss: 1.5521 - val_accuracy: 0.4638
Epoch 10/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.2537 - accuracy: 0.5607 - val_loss: 1.5281 - val_accuracy: 0.4924
Epoch 11/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2215 - accuracy: 0.5782 - val_loss: 1.5147 - val_accuracy: 0.5046
Epoch 12/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.1910 - accuracy: 0.5831 - val_loss: 1.5248 - val_accuracy: 0.5002
Epoch 13/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1659 - accuracy: 0.5982 - val_loss: 1.5620 - val_accuracy: 0.5066
Epoch 14/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1282 - accuracy: 0.6120 - val_loss: 1.5440 - val_accuracy: 0.5180
Epoch 15/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1127 - accuracy: 0.6133 - val_loss: 1.5782 - val_accuracy: 0.5146
Epoch 16/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0917 - accuracy: 0.6266 - val_loss: 1.6182 - val_accuracy: 0.5182
Epoch 17/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.0620 - accuracy: 0.6331 - val_loss: 1.6285 - val_accuracy: 0.5126
Epoch 18/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0433 - accuracy: 0.6413 - val_loss: 1.6299 - val_accuracy: 0.5158
Epoch 19/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0087 - accuracy: 0.6549 - val_loss: 1.7172 - val_accuracy: 0.5062
Epoch 20/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9950 - accuracy: 0.6571 - val_loss: 1.6524 - val_accuracy: 0.5098
Epoch 21/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9848 - accuracy: 0.6652 - val_loss: 1.7686 - val_accuracy: 0.5038
Epoch 22/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9597 - accuracy: 0.6744 - val_loss: 1.6177 - val_accuracy: 0.5084
Epoch 23/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9399 - accuracy: 0.6790 - val_loss: 1.7095 - val_accuracy: 0.5082
Epoch 24/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9148 - accuracy: 0.6884 - val_loss: 1.7160 - val_accuracy: 0.5150
Epoch 25/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9023 - accuracy: 0.6949 - val_loss: 1.7017 - val_accuracy: 0.5152
Epoch 26/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8732 - accuracy: 0.7031 - val_loss: 1.7274 - val_accuracy: 0.5088
Epoch 27/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.8542 - accuracy: 0.7091 - val_loss: 1.7648 - val_accuracy: 0.5166
Epoch 28/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8499 - accuracy: 0.7118 - val_loss: 1.7973 - val_accuracy: 0.5000
157/157 [==============================] - 0s 1ms/step - loss: 1.4805 - accuracy: 0.4890
###Markdown
The model reaches 48.9% accuracy on the validation set. That's very slightly better than without dropout (47.6%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
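# Because MCAlphaDropout forces training=True, every predict() call samples fresh
# dropout masks, and averaging over n_samples gives the MC Dropout estimate.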
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get no accuracy improvement in this case (we're still at 48.9% accuracy). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(learning_rate=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(math.ceil(len(X_train_scaled) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/15
352/352 [==============================] - 3s 6ms/step - loss: 2.2298 - accuracy: 0.2349 - val_loss: 1.7841 - val_accuracy: 0.3834
Epoch 2/15
352/352 [==============================] - 2s 6ms/step - loss: 1.7928 - accuracy: 0.3689 - val_loss: 1.6806 - val_accuracy: 0.4086
Epoch 3/15
352/352 [==============================] - 2s 6ms/step - loss: 1.6475 - accuracy: 0.4190 - val_loss: 1.6378 - val_accuracy: 0.4350
Epoch 4/15
352/352 [==============================] - 2s 6ms/step - loss: 1.5428 - accuracy: 0.4543 - val_loss: 1.6266 - val_accuracy: 0.4390
Epoch 5/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4865 - accuracy: 0.4769 - val_loss: 1.6158 - val_accuracy: 0.4384
Epoch 6/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4339 - accuracy: 0.4866 - val_loss: 1.5850 - val_accuracy: 0.4412
Epoch 7/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4042 - accuracy: 0.5056 - val_loss: 1.6146 - val_accuracy: 0.4384
Epoch 8/15
352/352 [==============================] - 2s 6ms/step - loss: 1.3437 - accuracy: 0.5229 - val_loss: 1.5299 - val_accuracy: 0.4846
Epoch 9/15
352/352 [==============================] - 2s 5ms/step - loss: 1.2721 - accuracy: 0.5459 - val_loss: 1.5145 - val_accuracy: 0.4874
Epoch 10/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1942 - accuracy: 0.5698 - val_loss: 1.4958 - val_accuracy: 0.5040
Epoch 11/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1211 - accuracy: 0.6033 - val_loss: 1.5406 - val_accuracy: 0.4984
Epoch 12/15
352/352 [==============================] - 2s 6ms/step - loss: 1.0673 - accuracy: 0.6161 - val_loss: 1.5284 - val_accuracy: 0.5144
Epoch 13/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9927 - accuracy: 0.6435 - val_loss: 1.5449 - val_accuracy: 0.5140
Epoch 14/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9205 - accuracy: 0.6703 - val_loss: 1.5652 - val_accuracy: 0.5224
Epoch 15/15
352/352 [==============================] - 2s 6ms/step - loss: 0.8936 - accuracy: 0.6801 - val_loss: 1.5912 - val_accuracy: 0.5198
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
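###Markdown
A quick hedged check (a minimal sketch, not part of the original code, reusing the `keras` import from the setup cell): `Dense` layers default to Glorot (Xavier) uniform initialization, and the He variants can also be requested by their string names.
###Code
# Sketch only: the default kernel initializer of a Dense layer is Glorot (Xavier) uniform,
# and He initialization can be selected by name instead of building an initializer object.
dense_default = keras.layers.Dense(10)
print(dense_default.kernel_initializer) # a GlorotUniform instance by default
keras.layers.Dense(10, activation="relu", kernel_initializer="he_uniform")
###Output
_____no_output_____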
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6314 - accuracy: 0.5054 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.8416 - accuracy: 0.7247 - val_loss: 0.7130 - val_accuracy: 0.7656
Epoch 3/10
1719/1719 [==============================] - 2s 879us/step - loss: 0.7053 - accuracy: 0.7637 - val_loss: 0.6427 - val_accuracy: 0.7898
Epoch 4/10
1719/1719 [==============================] - 2s 883us/step - loss: 0.6325 - accuracy: 0.7908 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 2s 887us/step - loss: 0.5992 - accuracy: 0.8021 - val_loss: 0.5582 - val_accuracy: 0.8200
Epoch 6/10
1719/1719 [==============================] - 2s 881us/step - loss: 0.5624 - accuracy: 0.8142 - val_loss: 0.5350 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.5379 - accuracy: 0.8217 - val_loss: 0.5157 - val_accuracy: 0.8304
Epoch 8/10
1719/1719 [==============================] - 2s 895us/step - loss: 0.5152 - accuracy: 0.8295 - val_loss: 0.5078 - val_accuracy: 0.8284
Epoch 9/10
1719/1719 [==============================] - 2s 911us/step - loss: 0.5100 - accuracy: 0.8268 - val_loss: 0.4895 - val_accuracy: 0.8390
Epoch 10/10
1719/1719 [==============================] - 2s 897us/step - loss: 0.4918 - accuracy: 0.8340 - val_loss: 0.4817 - val_accuracy: 0.8396
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6969 - accuracy: 0.4974 - val_loss: 0.9255 - val_accuracy: 0.7186
Epoch 2/10
1719/1719 [==============================] - 2s 990us/step - loss: 0.8706 - accuracy: 0.7247 - val_loss: 0.7305 - val_accuracy: 0.7630
Epoch 3/10
1719/1719 [==============================] - 2s 980us/step - loss: 0.7211 - accuracy: 0.7621 - val_loss: 0.6564 - val_accuracy: 0.7882
Epoch 4/10
1719/1719 [==============================] - 2s 985us/step - loss: 0.6447 - accuracy: 0.7879 - val_loss: 0.6003 - val_accuracy: 0.8048
Epoch 5/10
1719/1719 [==============================] - 2s 967us/step - loss: 0.6077 - accuracy: 0.8004 - val_loss: 0.5656 - val_accuracy: 0.8182
Epoch 6/10
1719/1719 [==============================] - 2s 984us/step - loss: 0.5692 - accuracy: 0.8118 - val_loss: 0.5406 - val_accuracy: 0.8236
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5428 - accuracy: 0.8194 - val_loss: 0.5196 - val_accuracy: 0.8314
Epoch 8/10
1719/1719 [==============================] - 2s 983us/step - loss: 0.5193 - accuracy: 0.8284 - val_loss: 0.5113 - val_accuracy: 0.8316
Epoch 9/10
1719/1719 [==============================] - 2s 992us/step - loss: 0.5128 - accuracy: 0.8272 - val_loss: 0.4916 - val_accuracy: 0.8378
Epoch 10/10
1719/1719 [==============================] - 2s 988us/step - loss: 0.4941 - accuracy: 0.8314 - val_loss: 0.4826 - val_accuracy: 0.8398
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial: just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
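###Markdown
As a quick hedged check (not in the original notebook), the constants computed above should come out close to alpha ≈ 1.6733 and scale ≈ 1.0507, the pair used by `keras.activations.selu`:
###Code
# Sketch only: print the closed-form SELU constants computed above; they should
# approximately match the alpha/scale values baked into keras.activations.selu.
print("alpha ~ {:.4f}, scale ~ {:.4f}".format(alpha_0_1, scale_0_1))
###Output
_____no_output_____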
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
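###Markdown
One hedged side note (a sketch, not the author's code): since regular dropout breaks self-normalization, a SELU network that needs dropout would use `keras.layers.AlphaDropout` instead, which is designed to preserve the mean and variance of its inputs.
###Code
# Sketch only: AlphaDropout is the dropout variant compatible with self-normalizing
# (SELU) networks; regular Dropout would break the mean/variance preservation.
keras.layers.AlphaDropout(rate=0.2)
###Output
_____no_output_____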
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 12s 6ms/step - loss: 1.3556 - accuracy: 0.4808 - val_loss: 0.7711 - val_accuracy: 0.6858
Epoch 2/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7537 - accuracy: 0.7235 - val_loss: 0.7534 - val_accuracy: 0.7384
Epoch 3/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7451 - accuracy: 0.7357 - val_loss: 0.5943 - val_accuracy: 0.7834
Epoch 4/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5699 - accuracy: 0.7906 - val_loss: 0.5434 - val_accuracy: 0.8066
Epoch 5/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5569 - accuracy: 0.8051 - val_loss: 0.4907 - val_accuracy: 0.8218
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 11s 5ms/step - loss: 2.0460 - accuracy: 0.1919 - val_loss: 1.5971 - val_accuracy: 0.3048
Epoch 2/5
1719/1719 [==============================] - 8s 5ms/step - loss: 1.2654 - accuracy: 0.4591 - val_loss: 0.9156 - val_accuracy: 0.6372
Epoch 3/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.9312 - accuracy: 0.6169 - val_loss: 0.8928 - val_accuracy: 0.6246
Epoch 4/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.8188 - accuracy: 0.6710 - val_loss: 0.6914 - val_accuracy: 0.7396
Epoch 5/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.7288 - accuracy: 0.7152 - val_loss: 0.6638 - val_accuracy: 0.7380
###Markdown
Not great at all: we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
#bn1.updates #deprecated
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.2287 - accuracy: 0.5993 - val_loss: 0.5526 - val_accuracy: 0.8230
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5996 - accuracy: 0.7959 - val_loss: 0.4725 - val_accuracy: 0.8468
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5312 - accuracy: 0.8168 - val_loss: 0.4375 - val_accuracy: 0.8558
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4884 - accuracy: 0.8294 - val_loss: 0.4153 - val_accuracy: 0.8596
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4717 - accuracy: 0.8343 - val_loss: 0.3997 - val_accuracy: 0.8640
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4420 - accuracy: 0.8461 - val_loss: 0.3867 - val_accuracy: 0.8694
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4285 - accuracy: 0.8496 - val_loss: 0.3763 - val_accuracy: 0.8710
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4086 - accuracy: 0.8552 - val_loss: 0.3711 - val_accuracy: 0.8740
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4079 - accuracy: 0.8566 - val_loss: 0.3631 - val_accuracy: 0.8752
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3903 - accuracy: 0.8617 - val_loss: 0.3573 - val_accuracy: 0.8750
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has some as well; it would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.3677 - accuracy: 0.5604 - val_loss: 0.6767 - val_accuracy: 0.7812
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.7136 - accuracy: 0.7702 - val_loss: 0.5566 - val_accuracy: 0.8184
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.6123 - accuracy: 0.7990 - val_loss: 0.5007 - val_accuracy: 0.8360
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5547 - accuracy: 0.8148 - val_loss: 0.4666 - val_accuracy: 0.8448
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5255 - accuracy: 0.8230 - val_loss: 0.4434 - val_accuracy: 0.8534
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4947 - accuracy: 0.8328 - val_loss: 0.4263 - val_accuracy: 0.8550
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4736 - accuracy: 0.8385 - val_loss: 0.4130 - val_accuracy: 0.8566
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4550 - accuracy: 0.8446 - val_loss: 0.4035 - val_accuracy: 0.8612
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4495 - accuracy: 0.8440 - val_loss: 0.3943 - val_accuracy: 0.8638
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4333 - accuracy: 0.8494 - val_loss: 0.3875 - val_accuracy: 0.8660
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
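###Markdown
A minimal sketch (assuming the Fashion MNIST `model`, `X_train`, `y_train` and validation arrays defined earlier in this notebook): a clipped optimizer is passed to `compile()` like any other optimizer.
###Code
# Sketch only: compile (and optionally train) with a gradient-clipped SGD optimizer.
# clipvalue caps every gradient component at +/-1.0 before the weight update is applied.
optimizer = keras.optimizers.SGD(learning_rate=1e-3, clipvalue=1.0)
model.compile(loss="sparse_categorical_crossentropy",
              optimizer=optimizer,
              metrics=["accuracy"])
# history = model.fit(X_train, y_train, epochs=1, validation_data=(X_valid, y_valid))
###Output
_____no_output_____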
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two: * `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6). * `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts. The validation set and the test set are also split this way, but without restricting the number of images. We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
###Output
_____no_output_____
###Markdown
Note that `model_B_on_A` and `model_A` actually share layers now, so when we train one, it will update both models. If we want to avoid that, we need to build `model_B_on_A` on top of a *clone* of `model_A`:
###Code
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
model_B_on_A = keras.models.Sequential(model_A_clone.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 0s 29ms/step - loss: 0.2575 - accuracy: 0.9487 - val_loss: 0.2797 - val_accuracy: 0.9270
Epoch 2/4
7/7 [==============================] - 0s 9ms/step - loss: 0.2566 - accuracy: 0.9371 - val_loss: 0.2701 - val_accuracy: 0.9300
Epoch 3/4
7/7 [==============================] - 0s 9ms/step - loss: 0.2473 - accuracy: 0.9332 - val_loss: 0.2613 - val_accuracy: 0.9341
Epoch 4/4
7/7 [==============================] - 0s 10ms/step - loss: 0.2450 - accuracy: 0.9463 - val_loss: 0.2531 - val_accuracy: 0.9391
Epoch 1/16
7/7 [==============================] - 1s 29ms/step - loss: 0.2106 - accuracy: 0.9524 - val_loss: 0.2045 - val_accuracy: 0.9615
Epoch 2/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1738 - accuracy: 0.9526 - val_loss: 0.1719 - val_accuracy: 0.9706
Epoch 3/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1451 - accuracy: 0.9660 - val_loss: 0.1491 - val_accuracy: 0.9807
Epoch 4/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1242 - accuracy: 0.9717 - val_loss: 0.1325 - val_accuracy: 0.9817
Epoch 5/16
7/7 [==============================] - 0s 11ms/step - loss: 0.1078 - accuracy: 0.9855 - val_loss: 0.1200 - val_accuracy: 0.9848
Epoch 6/16
7/7 [==============================] - 0s 11ms/step - loss: 0.1075 - accuracy: 0.9931 - val_loss: 0.1101 - val_accuracy: 0.9858
Epoch 7/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0893 - accuracy: 0.9950 - val_loss: 0.1020 - val_accuracy: 0.9858
Epoch 8/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0815 - accuracy: 0.9950 - val_loss: 0.0953 - val_accuracy: 0.9868
Epoch 9/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0640 - accuracy: 0.9973 - val_loss: 0.0892 - val_accuracy: 0.9868
Epoch 10/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0641 - accuracy: 0.9931 - val_loss: 0.0844 - val_accuracy: 0.9878
Epoch 11/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0609 - accuracy: 0.9931 - val_loss: 0.0800 - val_accuracy: 0.9888
Epoch 12/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0641 - accuracy: 1.0000 - val_loss: 0.0762 - val_accuracy: 0.9888
Epoch 13/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0478 - accuracy: 1.0000 - val_loss: 0.0728 - val_accuracy: 0.9888
Epoch 14/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0444 - accuracy: 1.0000 - val_loss: 0.0700 - val_accuracy: 0.9878
Epoch 15/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0490 - accuracy: 1.0000 - val_loss: 0.0675 - val_accuracy: 0.9878
Epoch 16/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0434 - accuracy: 1.0000 - val_loss: 0.0652 - val_accuracy: 0.9878
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 2s 41us/sample - loss: 1.2810 - accuracy: 0.6205 - val_loss: 0.8869 - val_accuracy: 0.7160
Epoch 2/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.7952 - accuracy: 0.7369 - val_loss: 0.7132 - val_accuracy: 0.7626
Epoch 3/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6817 - accuracy: 0.7726 - val_loss: 0.6385 - val_accuracy: 0.7894
Epoch 4/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6219 - accuracy: 0.7942 - val_loss: 0.5931 - val_accuracy: 0.8016
Epoch 5/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5830 - accuracy: 0.8074 - val_loss: 0.5607 - val_accuracy: 0.8170
Epoch 6/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5552 - accuracy: 0.8172 - val_loss: 0.5355 - val_accuracy: 0.8238
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5339 - accuracy: 0.8226 - val_loss: 0.5166 - val_accuracy: 0.8298
Epoch 8/10
55000/55000 [==============================] - 2s 43us/sample - loss: 0.5173 - accuracy: 0.8262 - val_loss: 0.5043 - val_accuracy: 0.8356
Epoch 9/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5039 - accuracy: 0.8306 - val_loss: 0.4889 - val_accuracy: 0.8384
Epoch 10/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.4923 - accuracy: 0.8333 - val_loss: 0.4816 - val_accuracy: 0.8394
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 47us/sample - loss: 1.3452 - accuracy: 0.6203 - val_loss: 0.9241 - val_accuracy: 0.7170
Epoch 2/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.8196 - accuracy: 0.7364 - val_loss: 0.7314 - val_accuracy: 0.7600
Epoch 3/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.6970 - accuracy: 0.7701 - val_loss: 0.6517 - val_accuracy: 0.7880
Epoch 4/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.6333 - accuracy: 0.7914 - val_loss: 0.6032 - val_accuracy: 0.8050
Epoch 5/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5916 - accuracy: 0.8049 - val_loss: 0.5689 - val_accuracy: 0.8162
Epoch 6/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5619 - accuracy: 0.8143 - val_loss: 0.5416 - val_accuracy: 0.8222
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5391 - accuracy: 0.8208 - val_loss: 0.5213 - val_accuracy: 0.8300
Epoch 8/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5214 - accuracy: 0.8258 - val_loss: 0.5075 - val_accuracy: 0.8348
Epoch 9/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5070 - accuracy: 0.8287 - val_loss: 0.4917 - val_accuracy: 0.8380
Epoch 10/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.4946 - accuracy: 0.8322 - val_loss: 0.4839 - val_accuracy: 0.8378
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial: just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 13s 238us/sample - loss: 1.1277 - accuracy: 0.5573 - val_loss: 0.8152 - val_accuracy: 0.6700
Epoch 2/5
55000/55000 [==============================] - 11s 198us/sample - loss: 0.6935 - accuracy: 0.7383 - val_loss: 0.5806 - val_accuracy: 0.7928
Epoch 3/5
55000/55000 [==============================] - 11s 196us/sample - loss: 0.5871 - accuracy: 0.7865 - val_loss: 0.6876 - val_accuracy: 0.7462
Epoch 4/5
55000/55000 [==============================] - 11s 199us/sample - loss: 0.5281 - accuracy: 0.8134 - val_loss: 0.5236 - val_accuracy: 0.8230
Epoch 5/5
55000/55000 [==============================] - 11s 201us/sample - loss: 0.4824 - accuracy: 0.8327 - val_loss: 0.5201 - val_accuracy: 0.8312
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 12s 213us/sample - loss: 1.7518 - accuracy: 0.2797 - val_loss: 1.2328 - val_accuracy: 0.4720
Epoch 2/5
55000/55000 [==============================] - 10s 177us/sample - loss: 1.1922 - accuracy: 0.4982 - val_loss: 1.0247 - val_accuracy: 0.5354
Epoch 3/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.9390 - accuracy: 0.6180 - val_loss: 1.0809 - val_accuracy: 0.5118
Epoch 4/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.7787 - accuracy: 0.6937 - val_loss: 0.7067 - val_accuracy: 0.7344
Epoch 5/5
55000/55000 [==============================] - 10s 180us/sample - loss: 0.7465 - accuracy: 0.7122 - val_loss: 0.9720 - val_accuracy: 0.5702
###Markdown
Not great at all: we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 63us/sample - loss: 0.8760 - accuracy: 0.7122 - val_loss: 0.5509 - val_accuracy: 0.8224
Epoch 2/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5737 - accuracy: 0.8039 - val_loss: 0.4723 - val_accuracy: 0.8460
Epoch 3/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5143 - accuracy: 0.8231 - val_loss: 0.4376 - val_accuracy: 0.8570
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4826 - accuracy: 0.8333 - val_loss: 0.4135 - val_accuracy: 0.8638
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4571 - accuracy: 0.8415 - val_loss: 0.3990 - val_accuracy: 0.8654
Epoch 6/10
55000/55000 [==============================] - 3s 53us/sample - loss: 0.4432 - accuracy: 0.8456 - val_loss: 0.3870 - val_accuracy: 0.8710
Epoch 7/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.4255 - accuracy: 0.8515 - val_loss: 0.3782 - val_accuracy: 0.8698
Epoch 8/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4150 - accuracy: 0.8536 - val_loss: 0.3708 - val_accuracy: 0.8758
Epoch 9/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4016 - accuracy: 0.8596 - val_loss: 0.3634 - val_accuracy: 0.8750
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3915 - accuracy: 0.8629 - val_loss: 0.3601 - val_accuracy: 0.8758
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has some as well; it would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 4s 64us/sample - loss: 0.8656 - accuracy: 0.7094 - val_loss: 0.5650 - val_accuracy: 0.8098
Epoch 2/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5766 - accuracy: 0.8018 - val_loss: 0.4834 - val_accuracy: 0.8358
Epoch 3/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5184 - accuracy: 0.8216 - val_loss: 0.4461 - val_accuracy: 0.8470
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4852 - accuracy: 0.8314 - val_loss: 0.4226 - val_accuracy: 0.8558
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4579 - accuracy: 0.8399 - val_loss: 0.4086 - val_accuracy: 0.8604
Epoch 6/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4406 - accuracy: 0.8457 - val_loss: 0.3974 - val_accuracy: 0.8640
Epoch 7/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4263 - accuracy: 0.8498 - val_loss: 0.3883 - val_accuracy: 0.8676
Epoch 8/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4152 - accuracy: 0.8530 - val_loss: 0.3803 - val_accuracy: 0.8682
Epoch 9/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4032 - accuracy: 0.8564 - val_loss: 0.3738 - val_accuracy: 0.8718
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3937 - accuracy: 0.8623 - val_loss: 0.3690 - val_accuracy: 0.8732
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two: * `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6). * `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts. The validation set and the test set are also split this way, but without restricting the number of images. We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Train on 200 samples, validate on 986 samples
Epoch 1/4
200/200 [==============================] - 0s 2ms/sample - loss: 0.5619 - accuracy: 0.6650 - val_loss: 0.5669 - val_accuracy: 0.6531
Epoch 2/4
200/200 [==============================] - 0s 208us/sample - loss: 0.5249 - accuracy: 0.7200 - val_loss: 0.5337 - val_accuracy: 0.6957
Epoch 3/4
200/200 [==============================] - 0s 200us/sample - loss: 0.4923 - accuracy: 0.7400 - val_loss: 0.5039 - val_accuracy: 0.7211
Epoch 4/4
200/200 [==============================] - 0s 214us/sample - loss: 0.4630 - accuracy: 0.7550 - val_loss: 0.4773 - val_accuracy: 0.7383
Train on 200 samples, validate on 986 samples
Epoch 1/16
200/200 [==============================] - 0s 2ms/sample - loss: 0.3864 - accuracy: 0.8200 - val_loss: 0.3357 - val_accuracy: 0.8661
Epoch 2/16
200/200 [==============================] - 0s 207us/sample - loss: 0.2701 - accuracy: 0.9350 - val_loss: 0.2608 - val_accuracy: 0.9249
Epoch 3/16
200/200 [==============================] - 0s 226us/sample - loss: 0.2082 - accuracy: 0.9650 - val_loss: 0.2150 - val_accuracy: 0.9503
Epoch 4/16
200/200 [==============================] - 0s 212us/sample - loss: 0.1695 - accuracy: 0.9800 - val_loss: 0.1840 - val_accuracy: 0.9625
Epoch 5/16
200/200 [==============================] - 0s 226us/sample - loss: 0.1428 - accuracy: 0.9800 - val_loss: 0.1602 - val_accuracy: 0.9706
Epoch 6/16
200/200 [==============================] - 0s 236us/sample - loss: 0.1221 - accuracy: 0.9850 - val_loss: 0.1424 - val_accuracy: 0.9797
Epoch 7/16
200/200 [==============================] - 0s 218us/sample - loss: 0.1067 - accuracy: 0.9950 - val_loss: 0.1293 - val_accuracy: 0.9828
Epoch 8/16
200/200 [==============================] - 0s 229us/sample - loss: 0.0952 - accuracy: 0.9950 - val_loss: 0.1186 - val_accuracy: 0.9848
Epoch 9/16
200/200 [==============================] - 0s 224us/sample - loss: 0.0858 - accuracy: 0.9950 - val_loss: 0.1099 - val_accuracy: 0.9848
Epoch 10/16
200/200 [==============================] - 0s 241us/sample - loss: 0.0781 - accuracy: 1.0000 - val_loss: 0.1026 - val_accuracy: 0.9878
Epoch 11/16
200/200 [==============================] - 0s 234us/sample - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0964 - val_accuracy: 0.9888
Epoch 12/16
200/200 [==============================] - 0s 222us/sample - loss: 0.0664 - accuracy: 1.0000 - val_loss: 0.0906 - val_accuracy: 0.9888
Epoch 13/16
200/200 [==============================] - 0s 228us/sample - loss: 0.0614 - accuracy: 1.0000 - val_loss: 0.0862 - val_accuracy: 0.9899
Epoch 14/16
200/200 [==============================] - 0s 225us/sample - loss: 0.0575 - accuracy: 1.0000 - val_loss: 0.0818 - val_accuracy: 0.9899
Epoch 15/16
200/200 [==============================] - 0s 219us/sample - loss: 0.0537 - accuracy: 1.0000 - val_loss: 0.0782 - val_accuracy: 0.9899
Epoch 16/16
200/200 [==============================] - 0s 221us/sample - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.0752 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
2000/2000 [==============================] - 0s 25us/sample - loss: 0.0697 - accuracy: 0.9925
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4!
###Code
(100 - 96.95) / (100 - 99.25)  # ratio of test error rates: model_B (100 - 96.95 %) vs. model_B_on_A (100 - 99.25 %)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
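###Markdown
A minimal NumPy sketch of the update rule this optimizer implements (the gradient function below is a stand-in for the gradient of the loss, not a Keras API): the velocity accumulates a decaying sum of past gradients, and the weights move along the velocity.
###Code
# Sketch only: momentum update rule, v <- beta*v - eta*grad, theta <- theta + v.
# `grad_fn` is a placeholder gradient function for illustration (here: gradient of ||theta||^2).
def momentum_step(theta, v, grad_fn, eta=0.001, beta=0.9):
    v = beta * v - eta * grad_fn(theta)
    return theta + v, v

theta, v = np.array([1.0, -2.0]), np.zeros(2)
for _ in range(3):
    theta, v = momentum_step(theta, v, grad_fn=lambda t: 2 * t)
###Output
_____no_output_____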
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c``` * Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
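###Markdown
As a hedged usage note (a sketch, reusing names defined earlier in this notebook): a two-argument schedule function plugs into `keras.callbacks.LearningRateScheduler` exactly like the one-argument version; Keras passes in the optimizer's current learning rate, so the decay is applied relative to whatever rate the optimizer starts with rather than a hard-coded `lr0`.
###Code
# Sketch only: the two-argument schedule is passed to LearningRateScheduler as usual;
# Keras supplies the current learning rate as the second argument at each epoch.
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
# history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
#                     validation_data=(X_valid_scaled, y_valid),
#                     callbacks=[lr_scheduler])
###Output
_____no_output_____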
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4872 - accuracy: 0.8296 - val_loss: 0.4141 - val_accuracy: 0.8548
Epoch 2/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3829 - accuracy: 0.8643 - val_loss: 0.3773 - val_accuracy: 0.8704
Epoch 3/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3495 - accuracy: 0.8763 - val_loss: 0.3696 - val_accuracy: 0.8730
Epoch 4/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3274 - accuracy: 0.8831 - val_loss: 0.3545 - val_accuracy: 0.8760
Epoch 5/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3102 - accuracy: 0.8899 - val_loss: 0.3460 - val_accuracy: 0.8784
Epoch 6/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2971 - accuracy: 0.8945 - val_loss: 0.3415 - val_accuracy: 0.8796
Epoch 7/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2858 - accuracy: 0.8985 - val_loss: 0.3353 - val_accuracy: 0.8834
Epoch 8/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2767 - accuracy: 0.9018 - val_loss: 0.3321 - val_accuracy: 0.8854
Epoch 9/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2685 - accuracy: 0.9043 - val_loss: 0.3281 - val_accuracy: 0.8862
Epoch 10/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2612 - accuracy: 0.9075 - val_loss: 0.3304 - val_accuracy: 0.8832
Epoch 11/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2554 - accuracy: 0.9097 - val_loss: 0.3261 - val_accuracy: 0.8868
Epoch 12/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2502 - accuracy: 0.9115 - val_loss: 0.3246 - val_accuracy: 0.8876
Epoch 13/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2456 - accuracy: 0.9133 - val_loss: 0.3243 - val_accuracy: 0.8870
Epoch 14/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2416 - accuracy: 0.9141 - val_loss: 0.3238 - val_accuracy: 0.8862
Epoch 15/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2380 - accuracy: 0.9170 - val_loss: 0.3197 - val_accuracy: 0.8876
Epoch 16/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2346 - accuracy: 0.9169 - val_loss: 0.3207 - val_accuracy: 0.8866
Epoch 17/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2321 - accuracy: 0.9186 - val_loss: 0.3182 - val_accuracy: 0.8878
Epoch 18/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2291 - accuracy: 0.9191 - val_loss: 0.3206 - val_accuracy: 0.8884
Epoch 19/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2271 - accuracy: 0.9201 - val_loss: 0.3194 - val_accuracy: 0.8876
Epoch 20/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2252 - accuracy: 0.9215 - val_loss: 0.3178 - val_accuracy: 0.8880
Epoch 21/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2234 - accuracy: 0.9218 - val_loss: 0.3171 - val_accuracy: 0.8904
Epoch 22/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2218 - accuracy: 0.9230 - val_loss: 0.3171 - val_accuracy: 0.8884
Epoch 23/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2204 - accuracy: 0.9227 - val_loss: 0.3168 - val_accuracy: 0.8882
Epoch 24/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2191 - accuracy: 0.9240 - val_loss: 0.3173 - val_accuracy: 0.8900
Epoch 25/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2182 - accuracy: 0.9239 - val_loss: 0.3166 - val_accuracy: 0.8892
###Markdown
For piecewise constant scheduling, try this:
###Code
# Note: n_steps_per_epoch is not defined in this snippet; it stands for the
# number of optimizer steps per epoch, e.g. len(X_train) // batch_size
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
    values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
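###Markdown
You would then pass this schedule to an optimizer, just as we did with the `ExponentialDecay` schedule earlier. The cell below is a minimal sketch of that wiring (it assumes a batch size of 32, so `n_steps_per_epoch = len(X_train) // 32`, and an already-built `model`); it is not run here:
###Code
# Minimal sketch, assuming batch size = 32 and an existing `model`
n_steps_per_epoch = len(X_train) // 32
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
    values=[0.01, 0.005, 0.001])
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
              metrics=["accuracy"])
###Output
_____no_output_____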
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.6576 - accuracy: 0.7743 - val_loss: 0.4901 - val_accuracy: 0.8300
Epoch 2/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.4587 - accuracy: 0.8387 - val_loss: 0.4316 - val_accuracy: 0.8490
Epoch 3/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.4119 - accuracy: 0.8560 - val_loss: 0.4117 - val_accuracy: 0.8580
Epoch 4/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.3842 - accuracy: 0.8657 - val_loss: 0.3920 - val_accuracy: 0.8638
Epoch 5/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3636 - accuracy: 0.8708 - val_loss: 0.3739 - val_accuracy: 0.8710
Epoch 6/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3460 - accuracy: 0.8767 - val_loss: 0.3742 - val_accuracy: 0.8690
Epoch 7/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.3312 - accuracy: 0.8818 - val_loss: 0.3760 - val_accuracy: 0.8656
Epoch 8/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.3194 - accuracy: 0.8846 - val_loss: 0.3583 - val_accuracy: 0.8756
Epoch 9/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3056 - accuracy: 0.8902 - val_loss: 0.3474 - val_accuracy: 0.8820
Epoch 10/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2943 - accuracy: 0.8937 - val_loss: 0.3993 - val_accuracy: 0.8562
Epoch 11/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2845 - accuracy: 0.8957 - val_loss: 0.3446 - val_accuracy: 0.8820
Epoch 12/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2720 - accuracy: 0.9020 - val_loss: 0.3348 - val_accuracy: 0.8808
Epoch 13/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2536 - accuracy: 0.9094 - val_loss: 0.3386 - val_accuracy: 0.8822
Epoch 14/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2420 - accuracy: 0.9125 - val_loss: 0.3313 - val_accuracy: 0.8858
Epoch 15/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.2288 - accuracy: 0.9174 - val_loss: 0.3241 - val_accuracy: 0.8840
Epoch 16/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2169 - accuracy: 0.9222 - val_loss: 0.3342 - val_accuracy: 0.8846
Epoch 17/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2067 - accuracy: 0.9264 - val_loss: 0.3208 - val_accuracy: 0.8874
Epoch 18/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1977 - accuracy: 0.9301 - val_loss: 0.3186 - val_accuracy: 0.8888
Epoch 19/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1892 - accuracy: 0.9329 - val_loss: 0.3278 - val_accuracy: 0.8848
Epoch 20/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1818 - accuracy: 0.9375 - val_loss: 0.3195 - val_accuracy: 0.8894
Epoch 21/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1756 - accuracy: 0.9395 - val_loss: 0.3163 - val_accuracy: 0.8948
Epoch 22/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.1701 - accuracy: 0.9416 - val_loss: 0.3177 - val_accuracy: 0.8920
Epoch 23/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.1657 - accuracy: 0.9441 - val_loss: 0.3168 - val_accuracy: 0.8944
Epoch 24/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1629 - accuracy: 0.9454 - val_loss: 0.3167 - val_accuracy: 0.8946
Epoch 25/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.1611 - accuracy: 0.9465 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 7s 133us/sample - loss: 1.6006 - accuracy: 0.8129 - val_loss: 0.7374 - val_accuracy: 0.8236
Epoch 2/2
55000/55000 [==============================] - 7s 128us/sample - loss: 0.7179 - accuracy: 0.8265 - val_loss: 0.6905 - val_accuracy: 0.8356
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 139us/sample - loss: 0.5856 - accuracy: 0.7992 - val_loss: 0.3908 - val_accuracy: 0.8570
Epoch 2/2
55000/55000 [==============================] - 6s 117us/sample - loss: 0.4260 - accuracy: 0.8443 - val_loss: 0.3389 - val_accuracy: 0.8730
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
Train on 55000 samples
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4186 - accuracy: 0.8451
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
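###Markdown
For reference, here is a sketch (not run in this notebook) of how you could estimate the test set accuracy with the `mc_model`, averaging the predicted probabilities over 100 stochastic forward passes, just as we did earlier with `model(X_test_scaled, training=True)`:
###Code
# Sketch: MC Dropout accuracy on the test set, averaging 100 stochastic passes
y_probas_mc = np.stack([mc_model.predict(X_test_scaled) for sample in range(100)])
y_pred_mc = np.argmax(y_probas_mc.mean(axis=0), axis=1)
mc_accuracy = np.sum(y_pred_mc == y_test) / len(y_test)
mc_accuracy
###Output
_____no_output_____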
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 6s 114us/sample - loss: 0.4734 - accuracy: 0.8364 - val_loss: 0.3999 - val_accuracy: 0.8614
Epoch 2/2
55000/55000 [==============================] - 6s 100us/sample - loss: 0.3583 - accuracy: 0.8685 - val_loss: 0.3494 - val_accuracy: 0.8746
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
5000/5000 [==============================] - 0s 65us/sample - loss: 1.5099 - accuracy: 0.4736
###Markdown
The model with the lowest validation loss gets about 47% accuracy on the validation set. It took 39 epochs to reach the lowest validation loss, with roughly 10 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 21s 466us/sample - loss: 1.8365 - accuracy: 0.3390 - val_loss: 1.6330 - val_accuracy: 0.4174
Epoch 2/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.6623 - accuracy: 0.4063 - val_loss: 1.5967 - val_accuracy: 0.4204
Epoch 3/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.5946 - accuracy: 0.4314 - val_loss: 1.5225 - val_accuracy: 0.4602
Epoch 4/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5417 - accuracy: 0.4551 - val_loss: 1.4680 - val_accuracy: 0.4756
Epoch 5/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5013 - accuracy: 0.4678 - val_loss: 1.4378 - val_accuracy: 0.4862
Epoch 6/100
45000/45000 [==============================] - 16s 361us/sample - loss: 1.4637 - accuracy: 0.4797 - val_loss: 1.4221 - val_accuracy: 0.4982
Epoch 7/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.4361 - accuracy: 0.4921 - val_loss: 1.4133 - val_accuracy: 0.4968
Epoch 8/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.4078 - accuracy: 0.4998 - val_loss: 1.3916 - val_accuracy: 0.5040
Epoch 9/100
45000/45000 [==============================] - 14s 315us/sample - loss: 1.3811 - accuracy: 0.5104 - val_loss: 1.3695 - val_accuracy: 0.5116
Epoch 10/100
45000/45000 [==============================] - 14s 318us/sample - loss: 1.3571 - accuracy: 0.5205 - val_loss: 1.3701 - val_accuracy: 0.5112
Epoch 11/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.3367 - accuracy: 0.5246 - val_loss: 1.3549 - val_accuracy: 0.5196
Epoch 12/100
45000/45000 [==============================] - 14s 316us/sample - loss: 1.3158 - accuracy: 0.5322 - val_loss: 1.4038 - val_accuracy: 0.5048
Epoch 13/100
45000/45000 [==============================] - 15s 328us/sample - loss: 1.3028 - accuracy: 0.5392 - val_loss: 1.3453 - val_accuracy: 0.5242
Epoch 14/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2798 - accuracy: 0.5460 - val_loss: 1.3427 - val_accuracy: 0.5218
Epoch 15/100
45000/45000 [==============================] - 15s 327us/sample - loss: 1.2642 - accuracy: 0.5502 - val_loss: 1.3802 - val_accuracy: 0.5072
Epoch 16/100
45000/45000 [==============================] - 15s 336us/sample - loss: 1.2497 - accuracy: 0.5592 - val_loss: 1.3870 - val_accuracy: 0.5154
Epoch 17/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.2339 - accuracy: 0.5645 - val_loss: 1.3270 - val_accuracy: 0.5366
Epoch 18/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2223 - accuracy: 0.5688 - val_loss: 1.3054 - val_accuracy: 0.5506
Epoch 19/100
45000/45000 [==============================] - 15s 339us/sample - loss: 1.2015 - accuracy: 0.5750 - val_loss: 1.3134 - val_accuracy: 0.5462
Epoch 20/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.1884 - accuracy: 0.5796 - val_loss: 1.3459 - val_accuracy: 0.5252
Epoch 21/100
45000/45000 [==============================] - 17s 370us/sample - loss: 1.1767 - accuracy: 0.5876 - val_loss: 1.3404 - val_accuracy: 0.5392
Epoch 22/100
45000/45000 [==============================] - 16s 366us/sample - loss: 1.1679 - accuracy: 0.5872 - val_loss: 1.3600 - val_accuracy: 0.5332
Epoch 23/100
45000/45000 [==============================] - 15s 337us/sample - loss: 1.1513 - accuracy: 0.5954 - val_loss: 1.3148 - val_accuracy: 0.5498
Epoch 24/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.1345 - accuracy: 0.6033 - val_loss: 1.3290 - val_accuracy: 0.5368
Epoch 25/100
45000/45000 [==============================] - 16s 350us/sample - loss: 1.1252 - accuracy: 0.6025 - val_loss: 1.3350 - val_accuracy: 0.5434
Epoch 26/100
45000/45000 [==============================] - 15s 341us/sample - loss: 1.1192 - accuracy: 0.6070 - val_loss: 1.3423 - val_accuracy: 0.5364
Epoch 27/100
45000/45000 [==============================] - 15s 342us/sample - loss: 1.1028 - accuracy: 0.6093 - val_loss: 1.3511 - val_accuracy: 0.5358
Epoch 28/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.0907 - accuracy: 0.6158 - val_loss: 1.3706 - val_accuracy: 0.5350
Epoch 29/100
45000/45000 [==============================] - 16s 345us/sample - loss: 1.0785 - accuracy: 0.6197 - val_loss: 1.3356 - val_accuracy: 0.5398
Epoch 30/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.0718 - accuracy: 0.6198 - val_loss: 1.3529 - val_accuracy: 0.5446
Epoch 31/100
45000/45000 [==============================] - 15s 333us/sample - loss: 1.0629 - accuracy: 0.6259 - val_loss: 1.3590 - val_accuracy: 0.5434
Epoch 32/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.0504 - accuracy: 0.6292 - val_loss: 1.3448 - val_accuracy: 0.5388
Epoch 33/100
45000/45000 [==============================] - 15s 325us/sample - loss: 1.0420 - accuracy: 0.6318 - val_loss: 1.3790 - val_accuracy: 0.5350
Epoch 34/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.0304 - accuracy: 0.6362 - val_loss: 1.3621 - val_accuracy: 0.5430
Epoch 35/100
45000/45000 [==============================] - 16s 356us/sample - loss: 1.0280 - accuracy: 0.6362 - val_loss: 1.3673 - val_accuracy: 0.5366
Epoch 36/100
45000/45000 [==============================] - 16s 354us/sample - loss: 1.0100 - accuracy: 0.6439 - val_loss: 1.3659 - val_accuracy: 0.5420
Epoch 37/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.0060 - accuracy: 0.6473 - val_loss: 1.3773 - val_accuracy: 0.5398
Epoch 38/100
45000/45000 [==============================] - 15s 332us/sample - loss: 0.9966 - accuracy: 0.6496 - val_loss: 1.3946 - val_accuracy: 0.5340
5000/5000 [==============================] - 1s 157us/sample - loss: 1.3054 - accuracy: 0.5506
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 39 epochs to reach the lowest validation loss, while the new model with BN took 18 epochs. That's more than twice as fast as the previous model. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 55% accuracy instead of 47%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged twice as fast, each epoch took about 16s instead of 10s, because of the extra computations required by the BN layers. So overall, although the number of epochs was reduced by 50%, the training time (wall time) was shortened by about 30%, which is still pretty significant! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
5000/5000 [==============================] - 0s 74us/sample - loss: 1.4626 - accuracy: 0.5140
###Markdown
We get 51.4% accuracy, which is better than the original model, but not quite as good as the model using batch normalization. Moreover, it took 13 epochs to reach the best model, which is much faster than both the original model and the BN model, plus each epoch took only 10 seconds, just like the original model. So it's by far the fastest model to train (both in terms of epochs and wall time). e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 12s 263us/sample - loss: 1.8763 - accuracy: 0.3330 - val_loss: 1.7595 - val_accuracy: 0.3668
Epoch 2/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.6527 - accuracy: 0.4148 - val_loss: 1.7666 - val_accuracy: 0.3808
Epoch 3/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.5682 - accuracy: 0.4439 - val_loss: 1.6393 - val_accuracy: 0.4490
Epoch 4/100
45000/45000 [==============================] - 10s 211us/sample - loss: 1.5030 - accuracy: 0.4698 - val_loss: 1.6028 - val_accuracy: 0.4466
Epoch 5/100
45000/45000 [==============================] - 9s 209us/sample - loss: 1.4430 - accuracy: 0.4913 - val_loss: 1.5394 - val_accuracy: 0.4562
Epoch 6/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.4005 - accuracy: 0.5084 - val_loss: 1.5408 - val_accuracy: 0.4818
Epoch 7/100
45000/45000 [==============================] - 10s 216us/sample - loss: 1.3541 - accuracy: 0.5298 - val_loss: 1.5236 - val_accuracy: 0.4866
Epoch 8/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.3189 - accuracy: 0.5405 - val_loss: 1.5174 - val_accuracy: 0.4926
Epoch 9/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.2800 - accuracy: 0.5570 - val_loss: 1.5722 - val_accuracy: 0.4998
Epoch 10/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.2512 - accuracy: 0.5656 - val_loss: 1.4974 - val_accuracy: 0.5082
Epoch 11/100
45000/45000 [==============================] - 9s 203us/sample - loss: 1.2141 - accuracy: 0.5802 - val_loss: 1.6123 - val_accuracy: 0.4916
Epoch 12/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.1856 - accuracy: 0.5893 - val_loss: 1.5449 - val_accuracy: 0.5016
Epoch 13/100
45000/45000 [==============================] - 9s 204us/sample - loss: 1.1602 - accuracy: 0.5978 - val_loss: 1.6241 - val_accuracy: 0.5056
Epoch 14/100
45000/45000 [==============================] - 9s 199us/sample - loss: 1.1290 - accuracy: 0.6118 - val_loss: 1.6085 - val_accuracy: 0.4936
Epoch 15/100
45000/45000 [==============================] - 9s 198us/sample - loss: 1.1050 - accuracy: 0.6176 - val_loss: 1.6951 - val_accuracy: 0.4860
Epoch 16/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.0786 - accuracy: 0.6293 - val_loss: 1.5806 - val_accuracy: 0.5044
Epoch 17/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.0629 - accuracy: 0.6362 - val_loss: 1.5932 - val_accuracy: 0.4970
Epoch 18/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.0330 - accuracy: 0.6458 - val_loss: 1.5968 - val_accuracy: 0.5080
Epoch 19/100
45000/45000 [==============================] - 9s 195us/sample - loss: 1.0104 - accuracy: 0.6488 - val_loss: 1.6166 - val_accuracy: 0.5152
Epoch 20/100
45000/45000 [==============================] - 9s 206us/sample - loss: 0.9896 - accuracy: 0.6629 - val_loss: 1.6174 - val_accuracy: 0.5154
Epoch 21/100
45000/45000 [==============================] - 9s 211us/sample - loss: 0.9741 - accuracy: 0.6650 - val_loss: 1.7201 - val_accuracy: 0.5040
Epoch 22/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9475 - accuracy: 0.6769 - val_loss: 1.7498 - val_accuracy: 0.5176
Epoch 23/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.9346 - accuracy: 0.6780 - val_loss: 1.7491 - val_accuracy: 0.5020
Epoch 24/100
45000/45000 [==============================] - 10s 223us/sample - loss: 1.1878 - accuracy: 0.6792 - val_loss: 1.6664 - val_accuracy: 0.4906
Epoch 25/100
45000/45000 [==============================] - 10s 219us/sample - loss: 0.9851 - accuracy: 0.6646 - val_loss: 1.7358 - val_accuracy: 0.5086
Epoch 26/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9053 - accuracy: 0.6911 - val_loss: 1.8361 - val_accuracy: 0.5094
Epoch 27/100
45000/45000 [==============================] - 10s 215us/sample - loss: 0.8681 - accuracy: 0.7048 - val_loss: 1.8487 - val_accuracy: 0.5036
Epoch 28/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.8460 - accuracy: 0.7132 - val_loss: 1.8516 - val_accuracy: 0.5068
Epoch 29/100
45000/45000 [==============================] - 10s 223us/sample - loss: 0.8258 - accuracy: 0.7208 - val_loss: 1.9383 - val_accuracy: 0.5094
Epoch 30/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.8106 - accuracy: 0.7248 - val_loss: 2.0527 - val_accuracy: 0.4974
5000/5000 [==============================] - 0s 71us/sample - loss: 1.4974 - accuracy: 0.5082
###Markdown
The model reaches 50.8% accuracy on the validation set. That's very slightly worse than without dropout (51.4%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get virtually no accuracy improvement in this case (from 50.8% to 50.9%). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(len(X_train_scaled) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/15
45000/45000 [==============================] - 3s 69us/sample - loss: 2.0504 - accuracy: 0.2823 - val_loss: 1.7711 - val_accuracy: 0.3706
Epoch 2/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.7626 - accuracy: 0.3766 - val_loss: 1.7751 - val_accuracy: 0.3844
Epoch 3/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.6264 - accuracy: 0.4272 - val_loss: 1.6774 - val_accuracy: 0.4216
Epoch 4/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.5527 - accuracy: 0.4474 - val_loss: 1.6633 - val_accuracy: 0.4316
Epoch 5/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.4997 - accuracy: 0.4701 - val_loss: 1.5909 - val_accuracy: 0.4540
Epoch 6/15
45000/45000 [==============================] - 3s 60us/sample - loss: 1.4564 - accuracy: 0.4841 - val_loss: 1.5982 - val_accuracy: 0.4624
Epoch 7/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.4232 - accuracy: 0.4958 - val_loss: 1.6417 - val_accuracy: 0.4382
Epoch 8/15
45000/45000 [==============================] - 3s 58us/sample - loss: 1.3530 - accuracy: 0.5199 - val_loss: 1.5050 - val_accuracy: 0.4778
Epoch 9/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.2771 - accuracy: 0.5480 - val_loss: 1.5254 - val_accuracy: 0.4928
Epoch 10/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.2073 - accuracy: 0.5726 - val_loss: 1.5013 - val_accuracy: 0.5052
Epoch 11/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.1380 - accuracy: 0.5948 - val_loss: 1.4941 - val_accuracy: 0.5170
Epoch 12/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.0672 - accuracy: 0.6204 - val_loss: 1.5091 - val_accuracy: 0.5106
Epoch 13/15
45000/45000 [==============================] - 3s 56us/sample - loss: 0.9967 - accuracy: 0.6466 - val_loss: 1.5261 - val_accuracy: 0.5212
Epoch 14/15
45000/45000 [==============================] - 3s 58us/sample - loss: 0.9301 - accuracy: 0.6712 - val_loss: 1.5437 - val_accuracy: 0.5264
Epoch 15/15
45000/45000 [==============================] - 3s 59us/sample - loss: 0.8893 - accuracy: 0.6866 - val_loss: 1.5650 - val_accuracy: 0.5276
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6314 - accuracy: 0.5054 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.8416 - accuracy: 0.7247 - val_loss: 0.7130 - val_accuracy: 0.7656
Epoch 3/10
1719/1719 [==============================] - 2s 879us/step - loss: 0.7053 - accuracy: 0.7637 - val_loss: 0.6427 - val_accuracy: 0.7898
Epoch 4/10
1719/1719 [==============================] - 2s 883us/step - loss: 0.6325 - accuracy: 0.7908 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 2s 887us/step - loss: 0.5992 - accuracy: 0.8021 - val_loss: 0.5582 - val_accuracy: 0.8200
Epoch 6/10
1719/1719 [==============================] - 2s 881us/step - loss: 0.5624 - accuracy: 0.8142 - val_loss: 0.5350 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.5379 - accuracy: 0.8217 - val_loss: 0.5157 - val_accuracy: 0.8304
Epoch 8/10
1719/1719 [==============================] - 2s 895us/step - loss: 0.5152 - accuracy: 0.8295 - val_loss: 0.5078 - val_accuracy: 0.8284
Epoch 9/10
1719/1719 [==============================] - 2s 911us/step - loss: 0.5100 - accuracy: 0.8268 - val_loss: 0.4895 - val_accuracy: 0.8390
Epoch 10/10
1719/1719 [==============================] - 2s 897us/step - loss: 0.4918 - accuracy: 0.8340 - val_loss: 0.4817 - val_accuracy: 0.8396
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6969 - accuracy: 0.4974 - val_loss: 0.9255 - val_accuracy: 0.7186
Epoch 2/10
1719/1719 [==============================] - 2s 990us/step - loss: 0.8706 - accuracy: 0.7247 - val_loss: 0.7305 - val_accuracy: 0.7630
Epoch 3/10
1719/1719 [==============================] - 2s 980us/step - loss: 0.7211 - accuracy: 0.7621 - val_loss: 0.6564 - val_accuracy: 0.7882
Epoch 4/10
1719/1719 [==============================] - 2s 985us/step - loss: 0.6447 - accuracy: 0.7879 - val_loss: 0.6003 - val_accuracy: 0.8048
Epoch 5/10
1719/1719 [==============================] - 2s 967us/step - loss: 0.6077 - accuracy: 0.8004 - val_loss: 0.5656 - val_accuracy: 0.8182
Epoch 6/10
1719/1719 [==============================] - 2s 984us/step - loss: 0.5692 - accuracy: 0.8118 - val_loss: 0.5406 - val_accuracy: 0.8236
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5428 - accuracy: 0.8194 - val_loss: 0.5196 - val_accuracy: 0.8314
Epoch 8/10
1719/1719 [==============================] - 2s 983us/step - loss: 0.5193 - accuracy: 0.8284 - val_loss: 0.5113 - val_accuracy: 0.8316
Epoch 9/10
1719/1719 [==============================] - 2s 992us/step - loss: 0.5128 - accuracy: 0.8272 - val_loss: 0.4916 - val_accuracy: 0.8378
Epoch 10/10
1719/1719 [==============================] - 2s 988us/step - loss: 0.4941 - accuracy: 0.8314 - val_loss: 0.4826 - val_accuracy: 0.8398
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
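###Markdown
As a quick sanity check (a minimal sketch, assuming the `z` grid, the `selu()` function and the `keras`/`tf`/`np` imports from the cells above; `builtin_selu` is just an illustrative name), the hand-rolled implementation should agree with Keras' built-in `keras.activations.selu`, which uses the same α and scale constants:
###Code
# Compare the custom selu() above with Keras' built-in SELU on the same grid
builtin_selu = keras.activations.selu(tf.constant(z, dtype=tf.float32)).numpy()
print("max abs difference:", np.abs(builtin_selu - selu(z)).max())  # expected to be tiny (float rounding only)
###Output
_____no_output_____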
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 12s 6ms/step - loss: 1.3556 - accuracy: 0.4808 - val_loss: 0.7711 - val_accuracy: 0.6858
Epoch 2/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7537 - accuracy: 0.7235 - val_loss: 0.7534 - val_accuracy: 0.7384
Epoch 3/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7451 - accuracy: 0.7357 - val_loss: 0.5943 - val_accuracy: 0.7834
Epoch 4/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5699 - accuracy: 0.7906 - val_loss: 0.5434 - val_accuracy: 0.8066
Epoch 5/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5569 - accuracy: 0.8051 - val_loss: 0.4907 - val_accuracy: 0.8218
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 11s 5ms/step - loss: 2.0460 - accuracy: 0.1919 - val_loss: 1.5971 - val_accuracy: 0.3048
Epoch 2/5
1719/1719 [==============================] - 8s 5ms/step - loss: 1.2654 - accuracy: 0.4591 - val_loss: 0.9156 - val_accuracy: 0.6372
Epoch 3/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.9312 - accuracy: 0.6169 - val_loss: 0.8928 - val_accuracy: 0.6246
Epoch 4/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.8188 - accuracy: 0.6710 - val_loss: 0.6914 - val_accuracy: 0.7396
Epoch 5/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.7288 - accuracy: 0.7152 - val_loss: 0.6638 - val_accuracy: 0.7380
###Markdown
Not great at all: we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
#bn1.updates #deprecated
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.2287 - accuracy: 0.5993 - val_loss: 0.5526 - val_accuracy: 0.8230
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5996 - accuracy: 0.7959 - val_loss: 0.4725 - val_accuracy: 0.8468
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5312 - accuracy: 0.8168 - val_loss: 0.4375 - val_accuracy: 0.8558
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4884 - accuracy: 0.8294 - val_loss: 0.4153 - val_accuracy: 0.8596
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4717 - accuracy: 0.8343 - val_loss: 0.3997 - val_accuracy: 0.8640
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4420 - accuracy: 0.8461 - val_loss: 0.3867 - val_accuracy: 0.8694
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4285 - accuracy: 0.8496 - val_loss: 0.3763 - val_accuracy: 0.8710
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4086 - accuracy: 0.8552 - val_loss: 0.3711 - val_accuracy: 0.8740
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4079 - accuracy: 0.8566 - val_loss: 0.3631 - val_accuracy: 0.8752
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3903 - accuracy: 0.8617 - val_loss: 0.3573 - val_accuracy: 0.8750
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need bias terms, since the `BatchNormalization` layer adds its own offset parameter; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.3677 - accuracy: 0.5604 - val_loss: 0.6767 - val_accuracy: 0.7812
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.7136 - accuracy: 0.7702 - val_loss: 0.5566 - val_accuracy: 0.8184
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.6123 - accuracy: 0.7990 - val_loss: 0.5007 - val_accuracy: 0.8360
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5547 - accuracy: 0.8148 - val_loss: 0.4666 - val_accuracy: 0.8448
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5255 - accuracy: 0.8230 - val_loss: 0.4434 - val_accuracy: 0.8534
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4947 - accuracy: 0.8328 - val_loss: 0.4263 - val_accuracy: 0.8550
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4736 - accuracy: 0.8385 - val_loss: 0.4130 - val_accuracy: 0.8566
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4550 - accuracy: 0.8446 - val_loss: 0.4035 - val_accuracy: 0.8612
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4495 - accuracy: 0.8440 - val_loss: 0.3943 - val_accuracy: 0.8638
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4333 - accuracy: 0.8494 - val_loss: 0.3875 - val_accuracy: 0.8660
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
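###Markdown
As a minimal usage sketch (assuming `keras` and the Fashion MNIST setup from the earlier cells; `clipped_sgd` and `clip_demo_model` are illustrative names), a clipped optimizer is simply passed to `compile()` like any other optimizer:
###Code
# Hypothetical example: gradient clipping by value in an otherwise standard model
clipped_sgd = keras.optimizers.SGD(learning_rate=1e-3, clipvalue=1.0)
clip_demo_model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"),
    keras.layers.Dense(10, activation="softmax")
])
clip_demo_model.compile(loss="sparse_categorical_crossentropy",
                        optimizer=clipped_sgd,
                        metrics=["accuracy"])
###Output
_____no_output_____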
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 1s 83ms/step - loss: 0.6155 - accuracy: 0.6184 - val_loss: 0.5843 - val_accuracy: 0.6329
Epoch 2/4
7/7 [==============================] - 0s 9ms/step - loss: 0.5550 - accuracy: 0.6638 - val_loss: 0.5467 - val_accuracy: 0.6805
Epoch 3/4
7/7 [==============================] - 0s 8ms/step - loss: 0.4897 - accuracy: 0.7482 - val_loss: 0.5146 - val_accuracy: 0.7089
Epoch 4/4
7/7 [==============================] - 0s 8ms/step - loss: 0.4899 - accuracy: 0.7405 - val_loss: 0.4859 - val_accuracy: 0.7323
Epoch 1/16
7/7 [==============================] - 0s 28ms/step - loss: 0.4380 - accuracy: 0.7774 - val_loss: 0.3460 - val_accuracy: 0.8661
Epoch 2/16
7/7 [==============================] - 0s 9ms/step - loss: 0.2971 - accuracy: 0.9143 - val_loss: 0.2603 - val_accuracy: 0.9310
Epoch 3/16
7/7 [==============================] - 0s 9ms/step - loss: 0.2034 - accuracy: 0.9777 - val_loss: 0.2110 - val_accuracy: 0.9554
Epoch 4/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1754 - accuracy: 0.9719 - val_loss: 0.1790 - val_accuracy: 0.9696
Epoch 5/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1348 - accuracy: 0.9809 - val_loss: 0.1561 - val_accuracy: 0.9757
Epoch 6/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1172 - accuracy: 0.9973 - val_loss: 0.1392 - val_accuracy: 0.9797
Epoch 7/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1137 - accuracy: 0.9931 - val_loss: 0.1266 - val_accuracy: 0.9838
Epoch 8/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1000 - accuracy: 0.9931 - val_loss: 0.1163 - val_accuracy: 0.9858
Epoch 9/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0834 - accuracy: 1.0000 - val_loss: 0.1065 - val_accuracy: 0.9888
Epoch 10/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0775 - accuracy: 1.0000 - val_loss: 0.0999 - val_accuracy: 0.9899
Epoch 11/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0689 - accuracy: 1.0000 - val_loss: 0.0939 - val_accuracy: 0.9899
Epoch 12/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0888 - val_accuracy: 0.9899
Epoch 13/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0565 - accuracy: 1.0000 - val_loss: 0.0839 - val_accuracy: 0.9899
Epoch 14/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0494 - accuracy: 1.0000 - val_loss: 0.0802 - val_accuracy: 0.9899
Epoch 15/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0544 - accuracy: 1.0000 - val_loss: 0.0768 - val_accuracy: 0.9899
Epoch 16/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0472 - accuracy: 1.0000 - val_loss: 0.0738 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 705us/step - loss: 0.0682 - accuracy: 0.9935
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4.5!
###Code
(100 - 97.05) / (100 - 99.35)
###Output
_____no_output_____
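###Markdown
For reference, the arithmetic in the cell above: taking model_B's test accuracy as 97.05% and model_B_on_A's as 99.35% (the values the expression plugs in), the error-rate ratio is (100 − 97.05) / (100 − 99.35) = 2.95 / 0.65 ≈ 4.5.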
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(learning_rate=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
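###Markdown
As a quick worked example of this formula, using Keras' convention (`c=1`, `s = 1/decay`) and the hyperparameters from the next cell (`lr0 = 0.01`, `decay = 1e-4`, hence `s = 10,000`): after 10,000 optimization steps the learning rate is 0.01 / (1 + 10,000/10,000) = 0.005 (halved), and after 20,000 steps it is 0.01 / 3 ≈ 0.0033.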
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
import math
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = math.ceil(len(X_train) / batch_size)
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
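###Markdown
This variant is hooked up exactly like the one-argument version (a minimal sketch, reusing the rest of the training setup from the cells above); Keras passes the current learning rate in for you:
###Code
# The two-argument schedule function goes into LearningRateScheduler as before;
# callbacks=[lr_scheduler] would then be passed to model.fit()
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
###Output
_____no_output_____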
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.learning_rate)
        K.set_value(self.model.optimizer.learning_rate, lr * 0.1**(1 / self.s))  # use the s passed to the constructor
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.learning_rate)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(learning_rate=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5995 - accuracy: 0.7923 - val_loss: 0.4095 - val_accuracy: 0.8606
Epoch 2/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3890 - accuracy: 0.8613 - val_loss: 0.3738 - val_accuracy: 0.8692
Epoch 3/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3530 - accuracy: 0.8772 - val_loss: 0.3735 - val_accuracy: 0.8692
Epoch 4/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3296 - accuracy: 0.8813 - val_loss: 0.3494 - val_accuracy: 0.8798
Epoch 5/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3178 - accuracy: 0.8867 - val_loss: 0.3430 - val_accuracy: 0.8794
Epoch 6/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2930 - accuracy: 0.8951 - val_loss: 0.3414 - val_accuracy: 0.8826
Epoch 7/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2854 - accuracy: 0.8985 - val_loss: 0.3354 - val_accuracy: 0.8810
Epoch 8/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9039 - val_loss: 0.3364 - val_accuracy: 0.8824
Epoch 9/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9047 - val_loss: 0.3265 - val_accuracy: 0.8846
Epoch 10/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2570 - accuracy: 0.9084 - val_loss: 0.3238 - val_accuracy: 0.8854
Epoch 11/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2502 - accuracy: 0.9117 - val_loss: 0.3250 - val_accuracy: 0.8862
Epoch 12/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2453 - accuracy: 0.9145 - val_loss: 0.3299 - val_accuracy: 0.8830
Epoch 13/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2408 - accuracy: 0.9154 - val_loss: 0.3219 - val_accuracy: 0.8870
Epoch 14/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2380 - accuracy: 0.9154 - val_loss: 0.3221 - val_accuracy: 0.8860
Epoch 15/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2378 - accuracy: 0.9166 - val_loss: 0.3208 - val_accuracy: 0.8864
Epoch 16/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2318 - accuracy: 0.9191 - val_loss: 0.3184 - val_accuracy: 0.8892
Epoch 17/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2266 - accuracy: 0.9212 - val_loss: 0.3197 - val_accuracy: 0.8906
Epoch 18/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2284 - accuracy: 0.9185 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 19/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2286 - accuracy: 0.9205 - val_loss: 0.3197 - val_accuracy: 0.8884
Epoch 20/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2288 - accuracy: 0.9211 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 21/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2265 - accuracy: 0.9212 - val_loss: 0.3179 - val_accuracy: 0.8904
Epoch 22/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2258 - accuracy: 0.9205 - val_loss: 0.3163 - val_accuracy: 0.8914
Epoch 23/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9226 - val_loss: 0.3170 - val_accuracy: 0.8904
Epoch 24/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2182 - accuracy: 0.9244 - val_loss: 0.3165 - val_accuracy: 0.8898
Epoch 25/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9229 - val_loss: 0.3164 - val_accuracy: 0.8904
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
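###Markdown
To actually use this schedule (a minimal sketch, following the same pattern as the `ExponentialDecay` schedule a few cells above), pass the schedule object as the optimizer's learning rate and then compile and fit as usual:
###Code
# Hypothetical usage of the piecewise-constant schedule defined above:
# the schedule object is accepted wherever a learning rate is expected
optimizer = keras.optimizers.SGD(learning_rate)
# ...then model.compile(..., optimizer=optimizer) and model.fit(...) as before
###Output
_____no_output_____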
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.learning_rate))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.learning_rate, self.model.optimizer.learning_rate * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = math.ceil(len(X) / batch_size) * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.learning_rate)
K.set_value(model.optimizer.learning_rate, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.learning_rate, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
###Output
_____no_output_____
###Markdown
**Warning**: In the `on_batch_end()` method, `logs["loss"]` used to contain the batch loss, but in TensorFlow 2.2.0 it was replaced with the mean loss (since the start of the epoch). This explains why the graph below is much smoother than in the book (if you are using TF 2.2 or above). It also means that there is a lag between the moment the batch loss starts exploding and the moment the explosion becomes clear in the graph. So you should choose a slightly smaller learning rate than you would have chosen with the "noisy" graph. Alternatively, you can tweak the `ExponentialLearningRate` callback above so it computes the batch loss (based on the current mean loss and the previous mean loss):```pythonclass ExponentialLearningRate(keras.callbacks.Callback): def __init__(self, factor): self.factor = factor self.rates = [] self.losses = [] def on_epoch_begin(self, epoch, logs=None): self.prev_loss = 0 def on_batch_end(self, batch, logs=None): batch_loss = logs["loss"] * (batch + 1) - self.prev_loss * batch self.prev_loss = logs["loss"] self.rates.append(K.get_value(self.model.optimizer.learning_rate)) self.losses.append(batch_loss) K.set_value(self.model.optimizer.learning_rate, self.model.optimizer.learning_rate * self.factor)```
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.learning_rate, rate)
n_epochs = 25
onecycle = OneCycleScheduler(math.ceil(len(X_train) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 1s 2ms/step - loss: 0.6572 - accuracy: 0.7740 - val_loss: 0.4872 - val_accuracy: 0.8338
Epoch 2/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4580 - accuracy: 0.8397 - val_loss: 0.4274 - val_accuracy: 0.8520
Epoch 3/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4121 - accuracy: 0.8545 - val_loss: 0.4116 - val_accuracy: 0.8588
Epoch 4/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3837 - accuracy: 0.8642 - val_loss: 0.3868 - val_accuracy: 0.8688
Epoch 5/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3639 - accuracy: 0.8719 - val_loss: 0.3766 - val_accuracy: 0.8688
Epoch 6/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3456 - accuracy: 0.8775 - val_loss: 0.3739 - val_accuracy: 0.8706
Epoch 7/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3330 - accuracy: 0.8811 - val_loss: 0.3635 - val_accuracy: 0.8708
Epoch 8/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3184 - accuracy: 0.8861 - val_loss: 0.3959 - val_accuracy: 0.8610
Epoch 9/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3065 - accuracy: 0.8890 - val_loss: 0.3475 - val_accuracy: 0.8770
Epoch 10/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2943 - accuracy: 0.8927 - val_loss: 0.3392 - val_accuracy: 0.8806
Epoch 11/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2838 - accuracy: 0.8963 - val_loss: 0.3467 - val_accuracy: 0.8800
Epoch 12/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2707 - accuracy: 0.9024 - val_loss: 0.3646 - val_accuracy: 0.8696
Epoch 13/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2536 - accuracy: 0.9079 - val_loss: 0.3350 - val_accuracy: 0.8842
Epoch 14/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2405 - accuracy: 0.9135 - val_loss: 0.3465 - val_accuracy: 0.8794
Epoch 15/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2279 - accuracy: 0.9185 - val_loss: 0.3257 - val_accuracy: 0.8830
Epoch 16/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2159 - accuracy: 0.9232 - val_loss: 0.3294 - val_accuracy: 0.8824
Epoch 17/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2062 - accuracy: 0.9263 - val_loss: 0.3333 - val_accuracy: 0.8882
Epoch 18/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1978 - accuracy: 0.9301 - val_loss: 0.3235 - val_accuracy: 0.8898
Epoch 19/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1892 - accuracy: 0.9337 - val_loss: 0.3233 - val_accuracy: 0.8906
Epoch 20/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1821 - accuracy: 0.9365 - val_loss: 0.3224 - val_accuracy: 0.8928
Epoch 21/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1752 - accuracy: 0.9400 - val_loss: 0.3220 - val_accuracy: 0.8908
Epoch 22/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1700 - accuracy: 0.9416 - val_loss: 0.3180 - val_accuracy: 0.8962
Epoch 23/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1655 - accuracy: 0.9438 - val_loss: 0.3187 - val_accuracy: 0.8940
Epoch 24/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1627 - accuracy: 0.9454 - val_loss: 0.3177 - val_accuracy: 0.8932
Epoch 25/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1610 - accuracy: 0.9462 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 3.2911 - accuracy: 0.7924 - val_loss: 0.7218 - val_accuracy: 0.8310
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.7282 - accuracy: 0.8245 - val_loss: 0.6826 - val_accuracy: 0.8382
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 0.7611 - accuracy: 0.7576 - val_loss: 0.3730 - val_accuracy: 0.8644
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4306 - accuracy: 0.8401 - val_loss: 0.3395 - val_accuracy: 0.8722
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4225 - accuracy: 0.8432
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5763 - accuracy: 0.8020 - val_loss: 0.3674 - val_accuracy: 0.8674
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3545 - accuracy: 0.8709 - val_loss: 0.3714 - val_accuracy: 0.8662
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(learning_rate=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4960 - accuracy: 0.4762
###Markdown
The model with the lowest validation loss gets about 47.6% accuracy on the validation set. It took 27 epochs to reach the lowest validation loss, with roughly 8 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 19s 9ms/step - loss: 1.9765 - accuracy: 0.2968 - val_loss: 1.6602 - val_accuracy: 0.4042
Epoch 2/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6787 - accuracy: 0.4056 - val_loss: 1.5887 - val_accuracy: 0.4304
Epoch 3/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6097 - accuracy: 0.4274 - val_loss: 1.5781 - val_accuracy: 0.4326
Epoch 4/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5574 - accuracy: 0.4486 - val_loss: 1.5064 - val_accuracy: 0.4676
Epoch 5/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5075 - accuracy: 0.4642 - val_loss: 1.4412 - val_accuracy: 0.4844
Epoch 6/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4664 - accuracy: 0.4787 - val_loss: 1.4179 - val_accuracy: 0.4984
Epoch 7/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4334 - accuracy: 0.4932 - val_loss: 1.4277 - val_accuracy: 0.4906
Epoch 8/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.4054 - accuracy: 0.5038 - val_loss: 1.3843 - val_accuracy: 0.5130
Epoch 9/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3816 - accuracy: 0.5106 - val_loss: 1.3691 - val_accuracy: 0.5108
Epoch 10/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3547 - accuracy: 0.5206 - val_loss: 1.3552 - val_accuracy: 0.5226
Epoch 11/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.3244 - accuracy: 0.5371 - val_loss: 1.3678 - val_accuracy: 0.5142
Epoch 12/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3078 - accuracy: 0.5393 - val_loss: 1.3844 - val_accuracy: 0.5080
Epoch 13/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2889 - accuracy: 0.5431 - val_loss: 1.3566 - val_accuracy: 0.5164
Epoch 14/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2607 - accuracy: 0.5559 - val_loss: 1.3626 - val_accuracy: 0.5248
Epoch 15/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2580 - accuracy: 0.5587 - val_loss: 1.3616 - val_accuracy: 0.5276
Epoch 16/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2441 - accuracy: 0.5586 - val_loss: 1.3350 - val_accuracy: 0.5286
Epoch 17/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2241 - accuracy: 0.5676 - val_loss: 1.3370 - val_accuracy: 0.5408
Epoch 18/100
<<29 more lines>>
Epoch 33/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0336 - accuracy: 0.6369 - val_loss: 1.3682 - val_accuracy: 0.5450
Epoch 34/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.0228 - accuracy: 0.6388 - val_loss: 1.3348 - val_accuracy: 0.5458
Epoch 35/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0205 - accuracy: 0.6407 - val_loss: 1.3490 - val_accuracy: 0.5440
Epoch 36/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.0008 - accuracy: 0.6489 - val_loss: 1.3568 - val_accuracy: 0.5408
Epoch 37/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9785 - accuracy: 0.6543 - val_loss: 1.3628 - val_accuracy: 0.5396
Epoch 38/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9832 - accuracy: 0.6592 - val_loss: 1.3617 - val_accuracy: 0.5482
Epoch 39/100
1407/1407 [==============================] - 12s 8ms/step - loss: 0.9707 - accuracy: 0.6581 - val_loss: 1.3767 - val_accuracy: 0.5446
Epoch 40/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9590 - accuracy: 0.6651 - val_loss: 1.4200 - val_accuracy: 0.5314
Epoch 41/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9548 - accuracy: 0.6668 - val_loss: 1.3692 - val_accuracy: 0.5450
Epoch 42/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9480 - accuracy: 0.6667 - val_loss: 1.3841 - val_accuracy: 0.5310
Epoch 43/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9411 - accuracy: 0.6716 - val_loss: 1.4036 - val_accuracy: 0.5382
Epoch 44/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9383 - accuracy: 0.6708 - val_loss: 1.4114 - val_accuracy: 0.5236
Epoch 45/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9258 - accuracy: 0.6769 - val_loss: 1.4224 - val_accuracy: 0.5324
Epoch 46/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9072 - accuracy: 0.6836 - val_loss: 1.3875 - val_accuracy: 0.5442
Epoch 47/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8996 - accuracy: 0.6850 - val_loss: 1.4449 - val_accuracy: 0.5280
Epoch 48/100
1407/1407 [==============================] - 13s 9ms/step - loss: 0.9050 - accuracy: 0.6835 - val_loss: 1.4167 - val_accuracy: 0.5338
Epoch 49/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8934 - accuracy: 0.6880 - val_loss: 1.4260 - val_accuracy: 0.5294
157/157 [==============================] - 1s 2ms/step - loss: 1.3344 - accuracy: 0.5398
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 27 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 5 epochs and continued to make progress until the 16th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 54.0% accuracy instead of 47.6%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 12s instead of 8s, because of the extra computations required by the BN layers. But overall the training time (wall time) was shortened significantly! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustements to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4633 - accuracy: 0.4792
###Markdown
We get 47.9% accuracy, which is not much better than the original model (47.6%), and not as good as the model using batch normalization (54.0%). However, convergence was almost as fast as with the BN model, plus each epoch took only 7 seconds. So it's by far the fastest model to train so far. e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 9s 5ms/step - loss: 2.0583 - accuracy: 0.2742 - val_loss: 1.7429 - val_accuracy: 0.3858
Epoch 2/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.6852 - accuracy: 0.4008 - val_loss: 1.7055 - val_accuracy: 0.3792
Epoch 3/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5963 - accuracy: 0.4413 - val_loss: 1.7401 - val_accuracy: 0.4072
Epoch 4/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5231 - accuracy: 0.4634 - val_loss: 1.5728 - val_accuracy: 0.4584
Epoch 5/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.4619 - accuracy: 0.4887 - val_loss: 1.5448 - val_accuracy: 0.4702
Epoch 6/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.4074 - accuracy: 0.5061 - val_loss: 1.5678 - val_accuracy: 0.4664
Epoch 7/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.3718 - accuracy: 0.5222 - val_loss: 1.5764 - val_accuracy: 0.4824
Epoch 8/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.3220 - accuracy: 0.5387 - val_loss: 1.4805 - val_accuracy: 0.4890
Epoch 9/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2908 - accuracy: 0.5487 - val_loss: 1.5521 - val_accuracy: 0.4638
Epoch 10/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.2537 - accuracy: 0.5607 - val_loss: 1.5281 - val_accuracy: 0.4924
Epoch 11/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2215 - accuracy: 0.5782 - val_loss: 1.5147 - val_accuracy: 0.5046
Epoch 12/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.1910 - accuracy: 0.5831 - val_loss: 1.5248 - val_accuracy: 0.5002
Epoch 13/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1659 - accuracy: 0.5982 - val_loss: 1.5620 - val_accuracy: 0.5066
Epoch 14/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1282 - accuracy: 0.6120 - val_loss: 1.5440 - val_accuracy: 0.5180
Epoch 15/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1127 - accuracy: 0.6133 - val_loss: 1.5782 - val_accuracy: 0.5146
Epoch 16/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0917 - accuracy: 0.6266 - val_loss: 1.6182 - val_accuracy: 0.5182
Epoch 17/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.0620 - accuracy: 0.6331 - val_loss: 1.6285 - val_accuracy: 0.5126
Epoch 18/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0433 - accuracy: 0.6413 - val_loss: 1.6299 - val_accuracy: 0.5158
Epoch 19/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0087 - accuracy: 0.6549 - val_loss: 1.7172 - val_accuracy: 0.5062
Epoch 20/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9950 - accuracy: 0.6571 - val_loss: 1.6524 - val_accuracy: 0.5098
Epoch 21/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9848 - accuracy: 0.6652 - val_loss: 1.7686 - val_accuracy: 0.5038
Epoch 22/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9597 - accuracy: 0.6744 - val_loss: 1.6177 - val_accuracy: 0.5084
Epoch 23/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9399 - accuracy: 0.6790 - val_loss: 1.7095 - val_accuracy: 0.5082
Epoch 24/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9148 - accuracy: 0.6884 - val_loss: 1.7160 - val_accuracy: 0.5150
Epoch 25/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9023 - accuracy: 0.6949 - val_loss: 1.7017 - val_accuracy: 0.5152
Epoch 26/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8732 - accuracy: 0.7031 - val_loss: 1.7274 - val_accuracy: 0.5088
Epoch 27/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.8542 - accuracy: 0.7091 - val_loss: 1.7648 - val_accuracy: 0.5166
Epoch 28/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8499 - accuracy: 0.7118 - val_loss: 1.7973 - val_accuracy: 0.5000
157/157 [==============================] - 0s 1ms/step - loss: 1.4805 - accuracy: 0.4890
###Markdown
The model reaches 48.9% accuracy on the validation set. That's very slightly better than without dropout (47.6%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. A minimal sketch of such a search is shown below.
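Note that the sketch below is not code from the original notebook: the `build_alpha_dropout_model()` helper, the 10-epoch budget, and the loop structure are illustrative assumptions; the two grids simply mirror the dropout rates and learning rates listed above.
###Code
# Hedged sketch (illustration only): a small grid search over alpha dropout
# rates and learning rates, reusing the architecture from the cell above.
# Training 16 candidates is slow; the 10-epoch budget just keeps the sketch short.
def build_alpha_dropout_model(rate, lr):
    model = keras.models.Sequential()
    model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
    for _ in range(20):
        model.add(keras.layers.Dense(100,
                                     kernel_initializer="lecun_normal",
                                     activation="selu"))
    model.add(keras.layers.AlphaDropout(rate=rate))
    model.add(keras.layers.Dense(10, activation="softmax"))
    model.compile(loss="sparse_categorical_crossentropy",
                  optimizer=keras.optimizers.Nadam(learning_rate=lr),
                  metrics=["accuracy"])
    return model
best_params, best_val_acc = None, 0.0
for rate in (0.05, 0.1, 0.2, 0.4):
    for lr in (1e-4, 3e-4, 5e-4, 1e-3):
        candidate = build_alpha_dropout_model(rate, lr)
        history = candidate.fit(X_train_scaled, y_train, epochs=10, verbose=0,
                                validation_data=(X_valid_scaled, y_valid))
        val_acc = max(history.history["val_accuracy"])
        if val_acc > best_val_acc:
            best_params, best_val_acc = (rate, lr), val_acc
best_params, best_val_acc
###Output
_____no_output_____
###Markdown
Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience: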
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple of utility functions. The first will run the model many times (10 by default) and return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for _ in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get no accuracy improvement in this case (we're still at 48.9% accuracy). So the best model we got in this exercise is the Batch Normalization model.
f. *Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(learning_rate=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(math.ceil(len(X_train_scaled) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/15
352/352 [==============================] - 3s 6ms/step - loss: 2.2298 - accuracy: 0.2349 - val_loss: 1.7841 - val_accuracy: 0.3834
Epoch 2/15
352/352 [==============================] - 2s 6ms/step - loss: 1.7928 - accuracy: 0.3689 - val_loss: 1.6806 - val_accuracy: 0.4086
Epoch 3/15
352/352 [==============================] - 2s 6ms/step - loss: 1.6475 - accuracy: 0.4190 - val_loss: 1.6378 - val_accuracy: 0.4350
Epoch 4/15
352/352 [==============================] - 2s 6ms/step - loss: 1.5428 - accuracy: 0.4543 - val_loss: 1.6266 - val_accuracy: 0.4390
Epoch 5/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4865 - accuracy: 0.4769 - val_loss: 1.6158 - val_accuracy: 0.4384
Epoch 6/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4339 - accuracy: 0.4866 - val_loss: 1.5850 - val_accuracy: 0.4412
Epoch 7/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4042 - accuracy: 0.5056 - val_loss: 1.6146 - val_accuracy: 0.4384
Epoch 8/15
352/352 [==============================] - 2s 6ms/step - loss: 1.3437 - accuracy: 0.5229 - val_loss: 1.5299 - val_accuracy: 0.4846
Epoch 9/15
352/352 [==============================] - 2s 5ms/step - loss: 1.2721 - accuracy: 0.5459 - val_loss: 1.5145 - val_accuracy: 0.4874
Epoch 10/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1942 - accuracy: 0.5698 - val_loss: 1.4958 - val_accuracy: 0.5040
Epoch 11/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1211 - accuracy: 0.6033 - val_loss: 1.5406 - val_accuracy: 0.4984
Epoch 12/15
352/352 [==============================] - 2s 6ms/step - loss: 1.0673 - accuracy: 0.6161 - val_loss: 1.5284 - val_accuracy: 0.5144
Epoch 13/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9927 - accuracy: 0.6435 - val_loss: 1.5449 - val_accuracy: 0.5140
Epoch 14/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9205 - accuracy: 0.6703 - val_loss: 1.5652 - val_accuracy: 0.5224
Epoch 15/15
352/352 [==============================] - 2s 6ms/step - loss: 0.8936 - accuracy: 0.6801 - val_loss: 1.5912 - val_accuracy: 0.5198
###Markdown
**Chapter 11 – Training Deep Neural Networks**
_This notebook contains all the sample code and solutions to the exercises in chapter 11._
Setup
First, let's import a few common modules, ensure Matplotlib plots figures inline, and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions
Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 2s 41us/sample - loss: 1.2810 - accuracy: 0.6205 - val_loss: 0.8869 - val_accuracy: 0.7160
Epoch 2/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.7952 - accuracy: 0.7369 - val_loss: 0.7132 - val_accuracy: 0.7626
Epoch 3/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6817 - accuracy: 0.7726 - val_loss: 0.6385 - val_accuracy: 0.7894
Epoch 4/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6219 - accuracy: 0.7942 - val_loss: 0.5931 - val_accuracy: 0.8016
Epoch 5/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5830 - accuracy: 0.8074 - val_loss: 0.5607 - val_accuracy: 0.8170
Epoch 6/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5552 - accuracy: 0.8172 - val_loss: 0.5355 - val_accuracy: 0.8238
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5339 - accuracy: 0.8226 - val_loss: 0.5166 - val_accuracy: 0.8298
Epoch 8/10
55000/55000 [==============================] - 2s 43us/sample - loss: 0.5173 - accuracy: 0.8262 - val_loss: 0.5043 - val_accuracy: 0.8356
Epoch 9/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5039 - accuracy: 0.8306 - val_loss: 0.4889 - val_accuracy: 0.8384
Epoch 10/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.4923 - accuracy: 0.8333 - val_loss: 0.4816 - val_accuracy: 0.8394
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 47us/sample - loss: 1.3452 - accuracy: 0.6203 - val_loss: 0.9241 - val_accuracy: 0.7170
Epoch 2/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.8196 - accuracy: 0.7364 - val_loss: 0.7314 - val_accuracy: 0.7600
Epoch 3/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.6970 - accuracy: 0.7701 - val_loss: 0.6517 - val_accuracy: 0.7880
Epoch 4/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.6333 - accuracy: 0.7914 - val_loss: 0.6032 - val_accuracy: 0.8050
Epoch 5/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5916 - accuracy: 0.8049 - val_loss: 0.5689 - val_accuracy: 0.8162
Epoch 6/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5619 - accuracy: 0.8143 - val_loss: 0.5416 - val_accuracy: 0.8222
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5391 - accuracy: 0.8208 - val_loss: 0.5213 - val_accuracy: 0.8300
Epoch 8/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5214 - accuracy: 0.8258 - val_loss: 0.5075 - val_accuracy: 0.8348
Epoch 9/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5070 - accuracy: 0.8287 - val_loss: 0.4917 - val_accuracy: 0.8380
Epoch 10/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.4946 - accuracy: 0.8322 - val_loss: 0.4839 - val_accuracy: 0.8378
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU
This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 13s 238us/sample - loss: 1.1277 - accuracy: 0.5573 - val_loss: 0.8152 - val_accuracy: 0.6700
Epoch 2/5
55000/55000 [==============================] - 11s 198us/sample - loss: 0.6935 - accuracy: 0.7383 - val_loss: 0.5806 - val_accuracy: 0.7928
Epoch 3/5
55000/55000 [==============================] - 11s 196us/sample - loss: 0.5871 - accuracy: 0.7865 - val_loss: 0.6876 - val_accuracy: 0.7462
Epoch 4/5
55000/55000 [==============================] - 11s 199us/sample - loss: 0.5281 - accuracy: 0.8134 - val_loss: 0.5236 - val_accuracy: 0.8230
Epoch 5/5
55000/55000 [==============================] - 11s 201us/sample - loss: 0.4824 - accuracy: 0.8327 - val_loss: 0.5201 - val_accuracy: 0.8312
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 12s 213us/sample - loss: 1.7518 - accuracy: 0.2797 - val_loss: 1.2328 - val_accuracy: 0.4720
Epoch 2/5
55000/55000 [==============================] - 10s 177us/sample - loss: 1.1922 - accuracy: 0.4982 - val_loss: 1.0247 - val_accuracy: 0.5354
Epoch 3/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.9390 - accuracy: 0.6180 - val_loss: 1.0809 - val_accuracy: 0.5118
Epoch 4/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.7787 - accuracy: 0.6937 - val_loss: 0.7067 - val_accuracy: 0.7344
Epoch 5/5
55000/55000 [==============================] - 10s 180us/sample - loss: 0.7465 - accuracy: 0.7122 - val_loss: 0.9720 - val_accuracy: 0.5702
###Markdown
Not great at all, we suffered from the vanishing/exploding gradients problem.
Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 63us/sample - loss: 0.8760 - accuracy: 0.7122 - val_loss: 0.5509 - val_accuracy: 0.8224
Epoch 2/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5737 - accuracy: 0.8039 - val_loss: 0.4723 - val_accuracy: 0.8460
Epoch 3/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5143 - accuracy: 0.8231 - val_loss: 0.4376 - val_accuracy: 0.8570
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4826 - accuracy: 0.8333 - val_loss: 0.4135 - val_accuracy: 0.8638
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4571 - accuracy: 0.8415 - val_loss: 0.3990 - val_accuracy: 0.8654
Epoch 6/10
55000/55000 [==============================] - 3s 53us/sample - loss: 0.4432 - accuracy: 0.8456 - val_loss: 0.3870 - val_accuracy: 0.8710
Epoch 7/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.4255 - accuracy: 0.8515 - val_loss: 0.3782 - val_accuracy: 0.8698
Epoch 8/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4150 - accuracy: 0.8536 - val_loss: 0.3708 - val_accuracy: 0.8758
Epoch 9/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4016 - accuracy: 0.8596 - val_loss: 0.3634 - val_accuracy: 0.8750
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3915 - accuracy: 0.8629 - val_loss: 0.3601 - val_accuracy: 0.8758
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has offset parameters of its own; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 4s 64us/sample - loss: 0.8656 - accuracy: 0.7094 - val_loss: 0.5650 - val_accuracy: 0.8098
Epoch 2/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5766 - accuracy: 0.8018 - val_loss: 0.4834 - val_accuracy: 0.8358
Epoch 3/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5184 - accuracy: 0.8216 - val_loss: 0.4461 - val_accuracy: 0.8470
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4852 - accuracy: 0.8314 - val_loss: 0.4226 - val_accuracy: 0.8558
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4579 - accuracy: 0.8399 - val_loss: 0.4086 - val_accuracy: 0.8604
Epoch 6/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4406 - accuracy: 0.8457 - val_loss: 0.3974 - val_accuracy: 0.8640
Epoch 7/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4263 - accuracy: 0.8498 - val_loss: 0.3883 - val_accuracy: 0.8676
Epoch 8/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4152 - accuracy: 0.8530 - val_loss: 0.3803 - val_accuracy: 0.8682
Epoch 9/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4032 - accuracy: 0.8564 - val_loss: 0.3738 - val_accuracy: 0.8718
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3937 - accuracy: 0.8623 - val_loss: 0.3690 - val_accuracy: 0.8732
###Markdown
Gradient Clipping
All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
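###Markdown
As a quick usage sketch (not from the original notebook; the small architecture below is only for illustration), a clipped optimizer is passed to `compile()` like any other. With `clipnorm=1.0`, each gradient tensor is rescaled whenever its L2 norm exceeds 1.0:
###Code
# Hedged sketch: gradient clipping by norm on a small Fashion MNIST classifier.
clip_model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
    keras.layers.Dense(10, activation="softmax")
])
clip_model.compile(loss="sparse_categorical_crossentropy",
                   optimizer=keras.optimizers.SGD(lr=1e-3, clipnorm=1.0),
                   metrics=["accuracy"])
###Output
_____no_output_____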
###Markdown
Reusing Pretrained Layers
Reusing a Keras model
Let's split the Fashion MNIST training set in two:
* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).
* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.
The validation set and the test set are also split this way, but without restricting the number of images.
We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Train on 200 samples, validate on 986 samples
Epoch 1/4
200/200 [==============================] - 0s 2ms/sample - loss: 0.5619 - accuracy: 0.6650 - val_loss: 0.5669 - val_accuracy: 0.6531
Epoch 2/4
200/200 [==============================] - 0s 208us/sample - loss: 0.5249 - accuracy: 0.7200 - val_loss: 0.5337 - val_accuracy: 0.6957
Epoch 3/4
200/200 [==============================] - 0s 200us/sample - loss: 0.4923 - accuracy: 0.7400 - val_loss: 0.5039 - val_accuracy: 0.7211
Epoch 4/4
200/200 [==============================] - 0s 214us/sample - loss: 0.4630 - accuracy: 0.7550 - val_loss: 0.4773 - val_accuracy: 0.7383
Train on 200 samples, validate on 986 samples
Epoch 1/16
200/200 [==============================] - 0s 2ms/sample - loss: 0.3864 - accuracy: 0.8200 - val_loss: 0.3357 - val_accuracy: 0.8661
Epoch 2/16
200/200 [==============================] - 0s 207us/sample - loss: 0.2701 - accuracy: 0.9350 - val_loss: 0.2608 - val_accuracy: 0.9249
Epoch 3/16
200/200 [==============================] - 0s 226us/sample - loss: 0.2082 - accuracy: 0.9650 - val_loss: 0.2150 - val_accuracy: 0.9503
Epoch 4/16
200/200 [==============================] - 0s 212us/sample - loss: 0.1695 - accuracy: 0.9800 - val_loss: 0.1840 - val_accuracy: 0.9625
Epoch 5/16
200/200 [==============================] - 0s 226us/sample - loss: 0.1428 - accuracy: 0.9800 - val_loss: 0.1602 - val_accuracy: 0.9706
Epoch 6/16
200/200 [==============================] - 0s 236us/sample - loss: 0.1221 - accuracy: 0.9850 - val_loss: 0.1424 - val_accuracy: 0.9797
Epoch 7/16
200/200 [==============================] - 0s 218us/sample - loss: 0.1067 - accuracy: 0.9950 - val_loss: 0.1293 - val_accuracy: 0.9828
Epoch 8/16
200/200 [==============================] - 0s 229us/sample - loss: 0.0952 - accuracy: 0.9950 - val_loss: 0.1186 - val_accuracy: 0.9848
Epoch 9/16
200/200 [==============================] - 0s 224us/sample - loss: 0.0858 - accuracy: 0.9950 - val_loss: 0.1099 - val_accuracy: 0.9848
Epoch 10/16
200/200 [==============================] - 0s 241us/sample - loss: 0.0781 - accuracy: 1.0000 - val_loss: 0.1026 - val_accuracy: 0.9878
Epoch 11/16
200/200 [==============================] - 0s 234us/sample - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0964 - val_accuracy: 0.9888
Epoch 12/16
200/200 [==============================] - 0s 222us/sample - loss: 0.0664 - accuracy: 1.0000 - val_loss: 0.0906 - val_accuracy: 0.9888
Epoch 13/16
200/200 [==============================] - 0s 228us/sample - loss: 0.0614 - accuracy: 1.0000 - val_loss: 0.0862 - val_accuracy: 0.9899
Epoch 14/16
200/200 [==============================] - 0s 225us/sample - loss: 0.0575 - accuracy: 1.0000 - val_loss: 0.0818 - val_accuracy: 0.9899
Epoch 15/16
200/200 [==============================] - 0s 219us/sample - loss: 0.0537 - accuracy: 1.0000 - val_loss: 0.0782 - val_accuracy: 0.9899
Epoch 16/16
200/200 [==============================] - 0s 221us/sample - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.0752 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
2000/2000 [==============================] - 0s 25us/sample - loss: 0.0697 - accuracy: 0.9925
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4!
###Code
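# error rate before transfer (model_B: 100 - 96.95) divided by error rate after transfer (model_B_on_A: 100 - 99.25): roughly a factor of 4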
(100 - 96.95) / (100 - 99.25)
###Output
_____no_output_____
###Markdown
Faster Optimizers
Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling
Power Scheduling
```lr = lr0 / (1 + steps / s)**c```
* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling
```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4872 - accuracy: 0.8296 - val_loss: 0.4141 - val_accuracy: 0.8548
Epoch 2/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3829 - accuracy: 0.8643 - val_loss: 0.3773 - val_accuracy: 0.8704
Epoch 3/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3495 - accuracy: 0.8763 - val_loss: 0.3696 - val_accuracy: 0.8730
Epoch 4/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3274 - accuracy: 0.8831 - val_loss: 0.3545 - val_accuracy: 0.8760
Epoch 5/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3102 - accuracy: 0.8899 - val_loss: 0.3460 - val_accuracy: 0.8784
Epoch 6/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2971 - accuracy: 0.8945 - val_loss: 0.3415 - val_accuracy: 0.8796
Epoch 7/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2858 - accuracy: 0.8985 - val_loss: 0.3353 - val_accuracy: 0.8834
Epoch 8/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2767 - accuracy: 0.9018 - val_loss: 0.3321 - val_accuracy: 0.8854
Epoch 9/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2685 - accuracy: 0.9043 - val_loss: 0.3281 - val_accuracy: 0.8862
Epoch 10/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2612 - accuracy: 0.9075 - val_loss: 0.3304 - val_accuracy: 0.8832
Epoch 11/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2554 - accuracy: 0.9097 - val_loss: 0.3261 - val_accuracy: 0.8868
Epoch 12/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2502 - accuracy: 0.9115 - val_loss: 0.3246 - val_accuracy: 0.8876
Epoch 13/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2456 - accuracy: 0.9133 - val_loss: 0.3243 - val_accuracy: 0.8870
Epoch 14/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2416 - accuracy: 0.9141 - val_loss: 0.3238 - val_accuracy: 0.8862
Epoch 15/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2380 - accuracy: 0.9170 - val_loss: 0.3197 - val_accuracy: 0.8876
Epoch 16/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2346 - accuracy: 0.9169 - val_loss: 0.3207 - val_accuracy: 0.8866
Epoch 17/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2321 - accuracy: 0.9186 - val_loss: 0.3182 - val_accuracy: 0.8878
Epoch 18/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2291 - accuracy: 0.9191 - val_loss: 0.3206 - val_accuracy: 0.8884
Epoch 19/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2271 - accuracy: 0.9201 - val_loss: 0.3194 - val_accuracy: 0.8876
Epoch 20/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2252 - accuracy: 0.9215 - val_loss: 0.3178 - val_accuracy: 0.8880
Epoch 21/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2234 - accuracy: 0.9218 - val_loss: 0.3171 - val_accuracy: 0.8904
Epoch 22/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2218 - accuracy: 0.9230 - val_loss: 0.3171 - val_accuracy: 0.8884
Epoch 23/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2204 - accuracy: 0.9227 - val_loss: 0.3168 - val_accuracy: 0.8882
Epoch 24/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2191 - accuracy: 0.9240 - val_loss: 0.3173 - val_accuracy: 0.8900
Epoch 25/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2182 - accuracy: 0.9239 - val_loss: 0.3166 - val_accuracy: 0.8892
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
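###Markdown
As a usage sketch (reusing the `model` compiled in the cells above purely for illustration), this schedule object is passed to an optimizer in exactly the same way as the `ExponentialDecay` schedule earlier:
###Code
# Hedged sketch: plug the piecewise-constant schedule into an optimizer.
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
###Output
_____no_output_____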
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.6576 - accuracy: 0.7743 - val_loss: 0.4901 - val_accuracy: 0.8300
Epoch 2/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.4587 - accuracy: 0.8387 - val_loss: 0.4316 - val_accuracy: 0.8490
Epoch 3/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.4119 - accuracy: 0.8560 - val_loss: 0.4117 - val_accuracy: 0.8580
Epoch 4/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.3842 - accuracy: 0.8657 - val_loss: 0.3920 - val_accuracy: 0.8638
Epoch 5/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3636 - accuracy: 0.8708 - val_loss: 0.3739 - val_accuracy: 0.8710
Epoch 6/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3460 - accuracy: 0.8767 - val_loss: 0.3742 - val_accuracy: 0.8690
Epoch 7/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.3312 - accuracy: 0.8818 - val_loss: 0.3760 - val_accuracy: 0.8656
Epoch 8/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.3194 - accuracy: 0.8846 - val_loss: 0.3583 - val_accuracy: 0.8756
Epoch 9/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3056 - accuracy: 0.8902 - val_loss: 0.3474 - val_accuracy: 0.8820
Epoch 10/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2943 - accuracy: 0.8937 - val_loss: 0.3993 - val_accuracy: 0.8562
Epoch 11/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2845 - accuracy: 0.8957 - val_loss: 0.3446 - val_accuracy: 0.8820
Epoch 12/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2720 - accuracy: 0.9020 - val_loss: 0.3348 - val_accuracy: 0.8808
Epoch 13/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2536 - accuracy: 0.9094 - val_loss: 0.3386 - val_accuracy: 0.8822
Epoch 14/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2420 - accuracy: 0.9125 - val_loss: 0.3313 - val_accuracy: 0.8858
Epoch 15/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.2288 - accuracy: 0.9174 - val_loss: 0.3241 - val_accuracy: 0.8840
Epoch 16/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2169 - accuracy: 0.9222 - val_loss: 0.3342 - val_accuracy: 0.8846
Epoch 17/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2067 - accuracy: 0.9264 - val_loss: 0.3208 - val_accuracy: 0.8874
Epoch 18/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1977 - accuracy: 0.9301 - val_loss: 0.3186 - val_accuracy: 0.8888
Epoch 19/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1892 - accuracy: 0.9329 - val_loss: 0.3278 - val_accuracy: 0.8848
Epoch 20/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1818 - accuracy: 0.9375 - val_loss: 0.3195 - val_accuracy: 0.8894
Epoch 21/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1756 - accuracy: 0.9395 - val_loss: 0.3163 - val_accuracy: 0.8948
Epoch 22/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.1701 - accuracy: 0.9416 - val_loss: 0.3177 - val_accuracy: 0.8920
Epoch 23/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.1657 - accuracy: 0.9441 - val_loss: 0.3168 - val_accuracy: 0.8944
Epoch 24/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1629 - accuracy: 0.9454 - val_loss: 0.3167 - val_accuracy: 0.8946
Epoch 25/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.1611 - accuracy: 0.9465 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 7s 133us/sample - loss: 1.6006 - accuracy: 0.8129 - val_loss: 0.7374 - val_accuracy: 0.8236
Epoch 2/2
55000/55000 [==============================] - 7s 128us/sample - loss: 0.7179 - accuracy: 0.8265 - val_loss: 0.6905 - val_accuracy: 0.8356
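###Markdown
As a quick check (not part of the original notebook), Keras collects the per-layer regularization penalties in `model.losses`; their sum is what gets added to the training loss at every step:
###Code
import tensorflow as tf  # already imported earlier in this notebook

reg_losses = model.losses             # one scalar tensor per regularized kernel
total_penalty = tf.add_n(reg_losses)  # total ℓ2 penalty for the current weights
print(len(reg_losses), float(total_penalty))
###Output
_____no_output_____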
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 139us/sample - loss: 0.5856 - accuracy: 0.7992 - val_loss: 0.3908 - val_accuracy: 0.8570
Epoch 2/2
55000/55000 [==============================] - 6s 117us/sample - loss: 0.4260 - accuracy: 0.8443 - val_loss: 0.3389 - val_accuracy: 0.8730
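###Markdown
Keep in mind that dropout is only active during training: at inference time the `Dropout` layers are no-ops, so `model.predict()` is deterministic, whereas calling the model with `training=True` keeps dropout on and makes the outputs stochastic (this is the basis of MC Dropout below). A quick check, not in the original notebook:
###Code
# Default inference mode: dropout is off, so repeated predictions are identical.
p1 = model.predict(X_valid_scaled[:1])
p2 = model.predict(X_valid_scaled[:1])
print(np.allclose(p1, p2))   # True

# Forcing training=True re-enables dropout, so the outputs differ between calls.
q1 = model(X_valid_scaled[:1], training=True).numpy()
q2 = model(X_valid_scaled[:1], training=True).numpy()
print(np.allclose(q1, q2))   # False (almost surely)
###Output
_____no_output_____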
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
Train on 55000 samples
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4186 - accuracy: 0.8451
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 6s 114us/sample - loss: 0.4734 - accuracy: 0.8364 - val_loss: 0.3999 - val_accuracy: 0.8614
Epoch 2/2
55000/55000 [==============================] - 6s 100us/sample - loss: 0.3583 - accuracy: 0.8685 - val_loss: 0.3494 - val_accuracy: 0.8746
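###Markdown
A quick sanity check (not in the original notebook): with `max_norm(1.)`, each hidden unit's incoming weight vector is rescaled after every training step so that its ℓ2 norm never exceeds 1.
###Code
W = model.layers[1].get_weights()[0]    # kernel of the first MaxNormDense layer, shape (784, 300)
unit_norms = np.linalg.norm(W, axis=0)  # one norm per hidden unit (max_norm uses axis=0 by default)
print(unit_norms.max())                 # should be <= 1.0 (up to floating-point noise)
###Output
_____no_output_____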
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
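The comparison itself can be scripted roughly as follows (a sketch, not the code that was actually used; the helper function `build_cifar10_dnn` and the log-directory names are made up for illustration):
###Code
import os
from tensorflow import keras

# Same data split as in the cells below
(X_train_full, y_train_full), _ = keras.datasets.cifar10.load_data()
X_train, y_train = X_train_full[5000:], y_train_full[5000:]
X_valid, y_valid = X_train_full[:5000], y_train_full[:5000]

def build_cifar10_dnn():
    # same architecture as above: 20 hidden layers of 100 ELU neurons
    dnn = keras.models.Sequential()
    dnn.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
    for _ in range(20):
        dnn.add(keras.layers.Dense(100, activation="elu",
                                   kernel_initializer="he_normal"))
    dnn.add(keras.layers.Dense(10, activation="softmax"))
    return dnn

# Train a fresh model for 10 epochs at each candidate rate, logging each run
# to its own TensorBoard directory so the learning curves can be compared.
for lr in (1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3, 1e-2):
    lr_model = build_cifar10_dnn()
    lr_model.compile(loss="sparse_categorical_crossentropy",
                     optimizer=keras.optimizers.Nadam(lr=lr),
                     metrics=["accuracy"])
    logdir = os.path.join("my_cifar10_lr_search", "lr_{:g}".format(lr))
    lr_model.fit(X_train, y_train, epochs=10,
                 validation_data=(X_valid, y_valid),
                 callbacks=[keras.callbacks.TensorBoard(logdir)])
###Output
_____no_output_____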
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
5000/5000 [==============================] - 0s 65us/sample - loss: 1.5099 - accuracy: 0.4736
###Markdown
The model with the lowest validation loss gets about 47% accuracy on the validation set. It took 39 epochs to reach the lowest validation loss, with roughly 10 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 21s 466us/sample - loss: 1.8365 - accuracy: 0.3390 - val_loss: 1.6330 - val_accuracy: 0.4174
Epoch 2/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.6623 - accuracy: 0.4063 - val_loss: 1.5967 - val_accuracy: 0.4204
Epoch 3/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.5946 - accuracy: 0.4314 - val_loss: 1.5225 - val_accuracy: 0.4602
Epoch 4/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5417 - accuracy: 0.4551 - val_loss: 1.4680 - val_accuracy: 0.4756
Epoch 5/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5013 - accuracy: 0.4678 - val_loss: 1.4378 - val_accuracy: 0.4862
Epoch 6/100
45000/45000 [==============================] - 16s 361us/sample - loss: 1.4637 - accuracy: 0.4797 - val_loss: 1.4221 - val_accuracy: 0.4982
Epoch 7/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.4361 - accuracy: 0.4921 - val_loss: 1.4133 - val_accuracy: 0.4968
Epoch 8/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.4078 - accuracy: 0.4998 - val_loss: 1.3916 - val_accuracy: 0.5040
Epoch 9/100
45000/45000 [==============================] - 14s 315us/sample - loss: 1.3811 - accuracy: 0.5104 - val_loss: 1.3695 - val_accuracy: 0.5116
Epoch 10/100
45000/45000 [==============================] - 14s 318us/sample - loss: 1.3571 - accuracy: 0.5205 - val_loss: 1.3701 - val_accuracy: 0.5112
Epoch 11/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.3367 - accuracy: 0.5246 - val_loss: 1.3549 - val_accuracy: 0.5196
Epoch 12/100
45000/45000 [==============================] - 14s 316us/sample - loss: 1.3158 - accuracy: 0.5322 - val_loss: 1.4038 - val_accuracy: 0.5048
Epoch 13/100
45000/45000 [==============================] - 15s 328us/sample - loss: 1.3028 - accuracy: 0.5392 - val_loss: 1.3453 - val_accuracy: 0.5242
Epoch 14/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2798 - accuracy: 0.5460 - val_loss: 1.3427 - val_accuracy: 0.5218
Epoch 15/100
45000/45000 [==============================] - 15s 327us/sample - loss: 1.2642 - accuracy: 0.5502 - val_loss: 1.3802 - val_accuracy: 0.5072
Epoch 16/100
45000/45000 [==============================] - 15s 336us/sample - loss: 1.2497 - accuracy: 0.5592 - val_loss: 1.3870 - val_accuracy: 0.5154
Epoch 17/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.2339 - accuracy: 0.5645 - val_loss: 1.3270 - val_accuracy: 0.5366
Epoch 18/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2223 - accuracy: 0.5688 - val_loss: 1.3054 - val_accuracy: 0.5506
Epoch 19/100
45000/45000 [==============================] - 15s 339us/sample - loss: 1.2015 - accuracy: 0.5750 - val_loss: 1.3134 - val_accuracy: 0.5462
Epoch 20/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.1884 - accuracy: 0.5796 - val_loss: 1.3459 - val_accuracy: 0.5252
Epoch 21/100
45000/45000 [==============================] - 17s 370us/sample - loss: 1.1767 - accuracy: 0.5876 - val_loss: 1.3404 - val_accuracy: 0.5392
Epoch 22/100
45000/45000 [==============================] - 16s 366us/sample - loss: 1.1679 - accuracy: 0.5872 - val_loss: 1.3600 - val_accuracy: 0.5332
Epoch 23/100
45000/45000 [==============================] - 15s 337us/sample - loss: 1.1513 - accuracy: 0.5954 - val_loss: 1.3148 - val_accuracy: 0.5498
Epoch 24/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.1345 - accuracy: 0.6033 - val_loss: 1.3290 - val_accuracy: 0.5368
Epoch 25/100
45000/45000 [==============================] - 16s 350us/sample - loss: 1.1252 - accuracy: 0.6025 - val_loss: 1.3350 - val_accuracy: 0.5434
Epoch 26/100
45000/45000 [==============================] - 15s 341us/sample - loss: 1.1192 - accuracy: 0.6070 - val_loss: 1.3423 - val_accuracy: 0.5364
Epoch 27/100
45000/45000 [==============================] - 15s 342us/sample - loss: 1.1028 - accuracy: 0.6093 - val_loss: 1.3511 - val_accuracy: 0.5358
Epoch 28/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.0907 - accuracy: 0.6158 - val_loss: 1.3706 - val_accuracy: 0.5350
Epoch 29/100
45000/45000 [==============================] - 16s 345us/sample - loss: 1.0785 - accuracy: 0.6197 - val_loss: 1.3356 - val_accuracy: 0.5398
Epoch 30/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.0718 - accuracy: 0.6198 - val_loss: 1.3529 - val_accuracy: 0.5446
Epoch 31/100
45000/45000 [==============================] - 15s 333us/sample - loss: 1.0629 - accuracy: 0.6259 - val_loss: 1.3590 - val_accuracy: 0.5434
Epoch 32/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.0504 - accuracy: 0.6292 - val_loss: 1.3448 - val_accuracy: 0.5388
Epoch 33/100
45000/45000 [==============================] - 15s 325us/sample - loss: 1.0420 - accuracy: 0.6318 - val_loss: 1.3790 - val_accuracy: 0.5350
Epoch 34/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.0304 - accuracy: 0.6362 - val_loss: 1.3621 - val_accuracy: 0.5430
Epoch 35/100
45000/45000 [==============================] - 16s 356us/sample - loss: 1.0280 - accuracy: 0.6362 - val_loss: 1.3673 - val_accuracy: 0.5366
Epoch 36/100
45000/45000 [==============================] - 16s 354us/sample - loss: 1.0100 - accuracy: 0.6439 - val_loss: 1.3659 - val_accuracy: 0.5420
Epoch 37/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.0060 - accuracy: 0.6473 - val_loss: 1.3773 - val_accuracy: 0.5398
Epoch 38/100
45000/45000 [==============================] - 15s 332us/sample - loss: 0.9966 - accuracy: 0.6496 - val_loss: 1.3946 - val_accuracy: 0.5340
5000/5000 [==============================] - 1s 157us/sample - loss: 1.3054 - accuracy: 0.5506
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 39 epochs to reach the lowest validation loss, while the new model with BN took 18 epochs. That's more than twice as fast as the previous model. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 55% accuracy instead of 47%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged twice as fast, each epoch took about 16s instead of 10s, because of the extra computations required by the BN layers. So overall, although the number of epochs was reduced by 50%, the training time (wall time) was shortened by 30%, which is still pretty significant! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
5000/5000 [==============================] - 0s 74us/sample - loss: 1.4626 - accuracy: 0.5140
###Markdown
We get 51.4% accuracy, which is better than the original model, but not quite as good as the model using batch normalization. Moreover, it took 13 epochs to reach the best model, which is much faster than both the original model and the BN model, plus each epoch took only 10 seconds, just like the original model. So it's by far the fastest model to train (both in terms of epochs and wall time). e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 12s 263us/sample - loss: 1.8763 - accuracy: 0.3330 - val_loss: 1.7595 - val_accuracy: 0.3668
Epoch 2/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.6527 - accuracy: 0.4148 - val_loss: 1.7666 - val_accuracy: 0.3808
Epoch 3/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.5682 - accuracy: 0.4439 - val_loss: 1.6393 - val_accuracy: 0.4490
Epoch 4/100
45000/45000 [==============================] - 10s 211us/sample - loss: 1.5030 - accuracy: 0.4698 - val_loss: 1.6028 - val_accuracy: 0.4466
Epoch 5/100
45000/45000 [==============================] - 9s 209us/sample - loss: 1.4430 - accuracy: 0.4913 - val_loss: 1.5394 - val_accuracy: 0.4562
Epoch 6/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.4005 - accuracy: 0.5084 - val_loss: 1.5408 - val_accuracy: 0.4818
Epoch 7/100
45000/45000 [==============================] - 10s 216us/sample - loss: 1.3541 - accuracy: 0.5298 - val_loss: 1.5236 - val_accuracy: 0.4866
Epoch 8/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.3189 - accuracy: 0.5405 - val_loss: 1.5174 - val_accuracy: 0.4926
Epoch 9/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.2800 - accuracy: 0.5570 - val_loss: 1.5722 - val_accuracy: 0.4998
Epoch 10/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.2512 - accuracy: 0.5656 - val_loss: 1.4974 - val_accuracy: 0.5082
Epoch 11/100
45000/45000 [==============================] - 9s 203us/sample - loss: 1.2141 - accuracy: 0.5802 - val_loss: 1.6123 - val_accuracy: 0.4916
Epoch 12/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.1856 - accuracy: 0.5893 - val_loss: 1.5449 - val_accuracy: 0.5016
Epoch 13/100
45000/45000 [==============================] - 9s 204us/sample - loss: 1.1602 - accuracy: 0.5978 - val_loss: 1.6241 - val_accuracy: 0.5056
Epoch 14/100
45000/45000 [==============================] - 9s 199us/sample - loss: 1.1290 - accuracy: 0.6118 - val_loss: 1.6085 - val_accuracy: 0.4936
Epoch 15/100
45000/45000 [==============================] - 9s 198us/sample - loss: 1.1050 - accuracy: 0.6176 - val_loss: 1.6951 - val_accuracy: 0.4860
Epoch 16/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.0786 - accuracy: 0.6293 - val_loss: 1.5806 - val_accuracy: 0.5044
Epoch 17/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.0629 - accuracy: 0.6362 - val_loss: 1.5932 - val_accuracy: 0.4970
Epoch 18/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.0330 - accuracy: 0.6458 - val_loss: 1.5968 - val_accuracy: 0.5080
Epoch 19/100
45000/45000 [==============================] - 9s 195us/sample - loss: 1.0104 - accuracy: 0.6488 - val_loss: 1.6166 - val_accuracy: 0.5152
Epoch 20/100
45000/45000 [==============================] - 9s 206us/sample - loss: 0.9896 - accuracy: 0.6629 - val_loss: 1.6174 - val_accuracy: 0.5154
Epoch 21/100
45000/45000 [==============================] - 9s 211us/sample - loss: 0.9741 - accuracy: 0.6650 - val_loss: 1.7201 - val_accuracy: 0.5040
Epoch 22/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9475 - accuracy: 0.6769 - val_loss: 1.7498 - val_accuracy: 0.5176
Epoch 23/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.9346 - accuracy: 0.6780 - val_loss: 1.7491 - val_accuracy: 0.5020
Epoch 24/100
45000/45000 [==============================] - 10s 223us/sample - loss: 1.1878 - accuracy: 0.6792 - val_loss: 1.6664 - val_accuracy: 0.4906
Epoch 25/100
45000/45000 [==============================] - 10s 219us/sample - loss: 0.9851 - accuracy: 0.6646 - val_loss: 1.7358 - val_accuracy: 0.5086
Epoch 26/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9053 - accuracy: 0.6911 - val_loss: 1.8361 - val_accuracy: 0.5094
Epoch 27/100
45000/45000 [==============================] - 10s 215us/sample - loss: 0.8681 - accuracy: 0.7048 - val_loss: 1.8487 - val_accuracy: 0.5036
Epoch 28/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.8460 - accuracy: 0.7132 - val_loss: 1.8516 - val_accuracy: 0.5068
Epoch 29/100
45000/45000 [==============================] - 10s 223us/sample - loss: 0.8258 - accuracy: 0.7208 - val_loss: 1.9383 - val_accuracy: 0.5094
Epoch 30/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.8106 - accuracy: 0.7248 - val_loss: 2.0527 - val_accuracy: 0.4974
5000/5000 [==============================] - 0s 71us/sample - loss: 1.4974 - accuracy: 0.5082
###Markdown
The model reaches 50.8% accuracy on the validation set. That's very slightly worse than without dropout (51.4%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple of utility functions. The first runs the model many times (10 by default) and returns the mean predicted class probabilities. The second uses these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get virtually no accuracy improvement in this case (from 50.8% to 50.9%). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
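As a reminder of what "1cycle" means: the learning rate ramps up linearly from a low value to `max_rate` during roughly the first half of training, ramps back down during the second half, and then drops by a few orders of magnitude over the last few percent of iterations. A rough illustration of that shape (a sketch, not the notebook's `OneCycleScheduler` class; the ramp fractions and rate ratios below are just illustrative defaults):
###Code
import matplotlib.pyplot as plt

def onecycle_rate(it, total, max_rate, start_rate=None, last_rate=None, last_frac=0.1):
    # piecewise-linear 1cycle shape: up, down, then a final sharp decay
    start_rate = start_rate or max_rate / 10
    last_rate = last_rate or start_rate / 1000
    last_its = int(total * last_frac)
    half = (total - last_its) // 2
    if it < half:                 # ramp up to max_rate
        return start_rate + (max_rate - start_rate) * it / half
    if it < 2 * half:             # ramp back down to start_rate
        return max_rate - (max_rate - start_rate) * (it - half) / half
    # final phase: decay from start_rate down to last_rate
    return max(start_rate - (start_rate - last_rate) * (it - 2 * half) / last_its, last_rate)

total_its = 5000
plt.plot([onecycle_rate(i, total_its, max_rate=0.05) for i in range(total_its)])
plt.xlabel("Iteration")
plt.ylabel("Learning rate")
plt.title("1cycle learning-rate shape")
plt.show()
###Output
_____no_output_____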
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(len(X_train_scaled) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/15
45000/45000 [==============================] - 3s 69us/sample - loss: 2.0504 - accuracy: 0.2823 - val_loss: 1.7711 - val_accuracy: 0.3706
Epoch 2/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.7626 - accuracy: 0.3766 - val_loss: 1.7751 - val_accuracy: 0.3844
Epoch 3/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.6264 - accuracy: 0.4272 - val_loss: 1.6774 - val_accuracy: 0.4216
Epoch 4/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.5527 - accuracy: 0.4474 - val_loss: 1.6633 - val_accuracy: 0.4316
Epoch 5/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.4997 - accuracy: 0.4701 - val_loss: 1.5909 - val_accuracy: 0.4540
Epoch 6/15
45000/45000 [==============================] - 3s 60us/sample - loss: 1.4564 - accuracy: 0.4841 - val_loss: 1.5982 - val_accuracy: 0.4624
Epoch 7/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.4232 - accuracy: 0.4958 - val_loss: 1.6417 - val_accuracy: 0.4382
Epoch 8/15
45000/45000 [==============================] - 3s 58us/sample - loss: 1.3530 - accuracy: 0.5199 - val_loss: 1.5050 - val_accuracy: 0.4778
Epoch 9/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.2771 - accuracy: 0.5480 - val_loss: 1.5254 - val_accuracy: 0.4928
Epoch 10/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.2073 - accuracy: 0.5726 - val_loss: 1.5013 - val_accuracy: 0.5052
Epoch 11/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.1380 - accuracy: 0.5948 - val_loss: 1.4941 - val_accuracy: 0.5170
Epoch 12/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.0672 - accuracy: 0.6204 - val_loss: 1.5091 - val_accuracy: 0.5106
Epoch 13/15
45000/45000 [==============================] - 3s 56us/sample - loss: 0.9967 - accuracy: 0.6466 - val_loss: 1.5261 - val_accuracy: 0.5212
Epoch 14/15
45000/45000 [==============================] - 3s 58us/sample - loss: 0.9301 - accuracy: 0.6712 - val_loss: 1.5437 - val_accuracy: 0.5264
Epoch 15/15
45000/45000 [==============================] - 3s 59us/sample - loss: 0.8893 - accuracy: 0.6866 - val_loss: 1.5650 - val_accuracy: 0.5276
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 13s 238us/sample - loss: 1.1277 - accuracy: 0.5573 - val_loss: 0.8152 - val_accuracy: 0.6700
Epoch 2/5
55000/55000 [==============================] - 11s 198us/sample - loss: 0.6935 - accuracy: 0.7383 - val_loss: 0.5806 - val_accuracy: 0.7928
Epoch 3/5
55000/55000 [==============================] - 11s 196us/sample - loss: 0.5871 - accuracy: 0.7865 - val_loss: 0.6876 - val_accuracy: 0.7462
Epoch 4/5
55000/55000 [==============================] - 11s 199us/sample - loss: 0.5281 - accuracy: 0.8134 - val_loss: 0.5236 - val_accuracy: 0.8230
Epoch 5/5
55000/55000 [==============================] - 11s 201us/sample - loss: 0.4824 - accuracy: 0.8327 - val_loss: 0.5201 - val_accuracy: 0.8312
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 12s 213us/sample - loss: 1.7518 - accuracy: 0.2797 - val_loss: 1.2328 - val_accuracy: 0.4720
Epoch 2/5
55000/55000 [==============================] - 10s 177us/sample - loss: 1.1922 - accuracy: 0.4982 - val_loss: 1.0247 - val_accuracy: 0.5354
Epoch 3/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.9390 - accuracy: 0.6180 - val_loss: 1.0809 - val_accuracy: 0.5118
Epoch 4/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.7787 - accuracy: 0.6937 - val_loss: 0.7067 - val_accuracy: 0.7344
Epoch 5/5
55000/55000 [==============================] - 10s 180us/sample - loss: 0.7465 - accuracy: 0.7122 - val_loss: 0.9720 - val_accuracy: 0.5702
###Markdown
Not great at all: we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 63us/sample - loss: 0.8760 - accuracy: 0.7122 - val_loss: 0.5509 - val_accuracy: 0.8224
Epoch 2/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5737 - accuracy: 0.8039 - val_loss: 0.4723 - val_accuracy: 0.8460
Epoch 3/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5143 - accuracy: 0.8231 - val_loss: 0.4376 - val_accuracy: 0.8570
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4826 - accuracy: 0.8333 - val_loss: 0.4135 - val_accuracy: 0.8638
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4571 - accuracy: 0.8415 - val_loss: 0.3990 - val_accuracy: 0.8654
Epoch 6/10
55000/55000 [==============================] - 3s 53us/sample - loss: 0.4432 - accuracy: 0.8456 - val_loss: 0.3870 - val_accuracy: 0.8710
Epoch 7/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.4255 - accuracy: 0.8515 - val_loss: 0.3782 - val_accuracy: 0.8698
Epoch 8/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4150 - accuracy: 0.8536 - val_loss: 0.3708 - val_accuracy: 0.8758
Epoch 9/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4016 - accuracy: 0.8596 - val_loss: 0.3634 - val_accuracy: 0.8750
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3915 - accuracy: 0.8629 - val_loss: 0.3601 - val_accuracy: 0.8758
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has its own offset parameters; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 4s 64us/sample - loss: 0.8656 - accuracy: 0.7094 - val_loss: 0.5650 - val_accuracy: 0.8098
Epoch 2/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5766 - accuracy: 0.8018 - val_loss: 0.4834 - val_accuracy: 0.8358
Epoch 3/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5184 - accuracy: 0.8216 - val_loss: 0.4461 - val_accuracy: 0.8470
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4852 - accuracy: 0.8314 - val_loss: 0.4226 - val_accuracy: 0.8558
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4579 - accuracy: 0.8399 - val_loss: 0.4086 - val_accuracy: 0.8604
Epoch 6/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4406 - accuracy: 0.8457 - val_loss: 0.3974 - val_accuracy: 0.8640
Epoch 7/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4263 - accuracy: 0.8498 - val_loss: 0.3883 - val_accuracy: 0.8676
Epoch 8/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4152 - accuracy: 0.8530 - val_loss: 0.3803 - val_accuracy: 0.8682
Epoch 9/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4032 - accuracy: 0.8564 - val_loss: 0.3738 - val_accuracy: 0.8718
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3937 - accuracy: 0.8623 - val_loss: 0.3690 - val_accuracy: 0.8732
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
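# Note (added for clarity): model_B_on_A currently shares its layers with model_A, so
# training model_B_on_A will also modify model_A's weights. To avoid that, build
# model_B_on_A from the layers of model_A_clone (created above) instead.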
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Train on 200 samples, validate on 986 samples
Epoch 1/4
200/200 [==============================] - 0s 2ms/sample - loss: 0.5619 - accuracy: 0.6650 - val_loss: 0.5669 - val_accuracy: 0.6531
Epoch 2/4
200/200 [==============================] - 0s 208us/sample - loss: 0.5249 - accuracy: 0.7200 - val_loss: 0.5337 - val_accuracy: 0.6957
Epoch 3/4
200/200 [==============================] - 0s 200us/sample - loss: 0.4923 - accuracy: 0.7400 - val_loss: 0.5039 - val_accuracy: 0.7211
Epoch 4/4
200/200 [==============================] - 0s 214us/sample - loss: 0.4630 - accuracy: 0.7550 - val_loss: 0.4773 - val_accuracy: 0.7383
Train on 200 samples, validate on 986 samples
Epoch 1/16
200/200 [==============================] - 0s 2ms/sample - loss: 0.3864 - accuracy: 0.8200 - val_loss: 0.3357 - val_accuracy: 0.8661
Epoch 2/16
200/200 [==============================] - 0s 207us/sample - loss: 0.2701 - accuracy: 0.9350 - val_loss: 0.2608 - val_accuracy: 0.9249
Epoch 3/16
200/200 [==============================] - 0s 226us/sample - loss: 0.2082 - accuracy: 0.9650 - val_loss: 0.2150 - val_accuracy: 0.9503
Epoch 4/16
200/200 [==============================] - 0s 212us/sample - loss: 0.1695 - accuracy: 0.9800 - val_loss: 0.1840 - val_accuracy: 0.9625
Epoch 5/16
200/200 [==============================] - 0s 226us/sample - loss: 0.1428 - accuracy: 0.9800 - val_loss: 0.1602 - val_accuracy: 0.9706
Epoch 6/16
200/200 [==============================] - 0s 236us/sample - loss: 0.1221 - accuracy: 0.9850 - val_loss: 0.1424 - val_accuracy: 0.9797
Epoch 7/16
200/200 [==============================] - 0s 218us/sample - loss: 0.1067 - accuracy: 0.9950 - val_loss: 0.1293 - val_accuracy: 0.9828
Epoch 8/16
200/200 [==============================] - 0s 229us/sample - loss: 0.0952 - accuracy: 0.9950 - val_loss: 0.1186 - val_accuracy: 0.9848
Epoch 9/16
200/200 [==============================] - 0s 224us/sample - loss: 0.0858 - accuracy: 0.9950 - val_loss: 0.1099 - val_accuracy: 0.9848
Epoch 10/16
200/200 [==============================] - 0s 241us/sample - loss: 0.0781 - accuracy: 1.0000 - val_loss: 0.1026 - val_accuracy: 0.9878
Epoch 11/16
200/200 [==============================] - 0s 234us/sample - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0964 - val_accuracy: 0.9888
Epoch 12/16
200/200 [==============================] - 0s 222us/sample - loss: 0.0664 - accuracy: 1.0000 - val_loss: 0.0906 - val_accuracy: 0.9888
Epoch 13/16
200/200 [==============================] - 0s 228us/sample - loss: 0.0614 - accuracy: 1.0000 - val_loss: 0.0862 - val_accuracy: 0.9899
Epoch 14/16
200/200 [==============================] - 0s 225us/sample - loss: 0.0575 - accuracy: 1.0000 - val_loss: 0.0818 - val_accuracy: 0.9899
Epoch 15/16
200/200 [==============================] - 0s 219us/sample - loss: 0.0537 - accuracy: 1.0000 - val_loss: 0.0782 - val_accuracy: 0.9899
Epoch 16/16
200/200 [==============================] - 0s 221us/sample - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.0752 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
2000/2000 [==============================] - 0s 25us/sample - loss: 0.0697 - accuracy: 0.9925
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4!
###Code
(100 - 96.95) / (100 - 99.25)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
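For example, with `lr0 = 0.01` and `decay = 1e-4` (i.e. `s = 10,000` steps), the learning rate is halved after 10,000 steps ($0.01 / (1 + 1) = 0.005$) and divided by 3 after 20,000 steps.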
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
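In other words, the learning rate is divided by 10 every `s` epochs: with `lr0 = 0.01` and `s = 20` (the values used below), it reaches 0.001 after 20 epochs and 0.0001 after 40.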
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
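Besides the `ExponentialDecay` schedule used in the next cell, tf.keras also provides other ready-made schedule objects that you pass directly to an optimizer. For instance, `PiecewiseConstantDecay` is the built-in equivalent of the piecewise constant schedule above, expressed in steps rather than epochs (a sketch, not part of the original notebook):
###Code
n_steps_per_epoch = len(X_train) // 32
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[5 * n_steps_per_epoch, 15 * n_steps_per_epoch],
    values=[0.01, 0.005, 0.001])
optimizer = keras.optimizers.SGD(learning_rate)
###Output
_____no_output_____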
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4872 - accuracy: 0.8296 - val_loss: 0.4141 - val_accuracy: 0.8548
Epoch 2/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3829 - accuracy: 0.8643 - val_loss: 0.3773 - val_accuracy: 0.8704
Epoch 3/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3495 - accuracy: 0.8763 - val_loss: 0.3696 - val_accuracy: 0.8730
Epoch 4/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3274 - accuracy: 0.8831 - val_loss: 0.3545 - val_accuracy: 0.8760
Epoch 5/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3102 - accuracy: 0.8899 - val_loss: 0.3460 - val_accuracy: 0.8784
Epoch 6/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2971 - accuracy: 0.8945 - val_loss: 0.3415 - val_accuracy: 0.8796
Epoch 7/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2858 - accuracy: 0.8985 - val_loss: 0.3353 - val_accuracy: 0.8834
Epoch 8/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2767 - accuracy: 0.9018 - val_loss: 0.3321 - val_accuracy: 0.8854
Epoch 9/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2685 - accuracy: 0.9043 - val_loss: 0.3281 - val_accuracy: 0.8862
Epoch 10/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2612 - accuracy: 0.9075 - val_loss: 0.3304 - val_accuracy: 0.8832
Epoch 11/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2554 - accuracy: 0.9097 - val_loss: 0.3261 - val_accuracy: 0.8868
Epoch 12/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2502 - accuracy: 0.9115 - val_loss: 0.3246 - val_accuracy: 0.8876
Epoch 13/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2456 - accuracy: 0.9133 - val_loss: 0.3243 - val_accuracy: 0.8870
Epoch 14/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2416 - accuracy: 0.9141 - val_loss: 0.3238 - val_accuracy: 0.8862
Epoch 15/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2380 - accuracy: 0.9170 - val_loss: 0.3197 - val_accuracy: 0.8876
Epoch 16/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2346 - accuracy: 0.9169 - val_loss: 0.3207 - val_accuracy: 0.8866
Epoch 17/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2321 - accuracy: 0.9186 - val_loss: 0.3182 - val_accuracy: 0.8878
Epoch 18/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2291 - accuracy: 0.9191 - val_loss: 0.3206 - val_accuracy: 0.8884
Epoch 19/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2271 - accuracy: 0.9201 - val_loss: 0.3194 - val_accuracy: 0.8876
Epoch 20/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2252 - accuracy: 0.9215 - val_loss: 0.3178 - val_accuracy: 0.8880
Epoch 21/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2234 - accuracy: 0.9218 - val_loss: 0.3171 - val_accuracy: 0.8904
Epoch 22/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2218 - accuracy: 0.9230 - val_loss: 0.3171 - val_accuracy: 0.8884
Epoch 23/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2204 - accuracy: 0.9227 - val_loss: 0.3168 - val_accuracy: 0.8882
Epoch 24/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2191 - accuracy: 0.9240 - val_loss: 0.3173 - val_accuracy: 0.8900
Epoch 25/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2182 - accuracy: 0.9239 - val_loss: 0.3166 - val_accuracy: 0.8892
###Markdown
For piecewise constant scheduling, try this:
###Code
n_steps_per_epoch = len(X_train) // 32  # assumption: the batch size of 32 used elsewhere in this notebook
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
    values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
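###Markdown
As with the `ExponentialDecay` schedule used earlier, the resulting schedule object can be passed directly to an optimizer as its learning rate. A minimal sketch, assuming the `learning_rate` schedule from the previous cell:
###Code
# Sketch: use the piecewise constant schedule as the optimizer's learning rate.
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
###Output
_____no_output_____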
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
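# OneCycleScheduler phases (as implemented above): the rate rises linearly from start_rate
# (max_rate / 10 by default) to max_rate over the first half of training, falls back to
# start_rate over a second phase of equal length, then drops linearly to last_rate
# (start_rate / 1000 by default) over the final iterations, roughly 10% of the total.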
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.6576 - accuracy: 0.7743 - val_loss: 0.4901 - val_accuracy: 0.8300
Epoch 2/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.4587 - accuracy: 0.8387 - val_loss: 0.4316 - val_accuracy: 0.8490
Epoch 3/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.4119 - accuracy: 0.8560 - val_loss: 0.4117 - val_accuracy: 0.8580
Epoch 4/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.3842 - accuracy: 0.8657 - val_loss: 0.3920 - val_accuracy: 0.8638
Epoch 5/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3636 - accuracy: 0.8708 - val_loss: 0.3739 - val_accuracy: 0.8710
Epoch 6/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3460 - accuracy: 0.8767 - val_loss: 0.3742 - val_accuracy: 0.8690
Epoch 7/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.3312 - accuracy: 0.8818 - val_loss: 0.3760 - val_accuracy: 0.8656
Epoch 8/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.3194 - accuracy: 0.8846 - val_loss: 0.3583 - val_accuracy: 0.8756
Epoch 9/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3056 - accuracy: 0.8902 - val_loss: 0.3474 - val_accuracy: 0.8820
Epoch 10/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2943 - accuracy: 0.8937 - val_loss: 0.3993 - val_accuracy: 0.8562
Epoch 11/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2845 - accuracy: 0.8957 - val_loss: 0.3446 - val_accuracy: 0.8820
Epoch 12/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2720 - accuracy: 0.9020 - val_loss: 0.3348 - val_accuracy: 0.8808
Epoch 13/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2536 - accuracy: 0.9094 - val_loss: 0.3386 - val_accuracy: 0.8822
Epoch 14/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2420 - accuracy: 0.9125 - val_loss: 0.3313 - val_accuracy: 0.8858
Epoch 15/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.2288 - accuracy: 0.9174 - val_loss: 0.3241 - val_accuracy: 0.8840
Epoch 16/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2169 - accuracy: 0.9222 - val_loss: 0.3342 - val_accuracy: 0.8846
Epoch 17/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2067 - accuracy: 0.9264 - val_loss: 0.3208 - val_accuracy: 0.8874
Epoch 18/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1977 - accuracy: 0.9301 - val_loss: 0.3186 - val_accuracy: 0.8888
Epoch 19/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1892 - accuracy: 0.9329 - val_loss: 0.3278 - val_accuracy: 0.8848
Epoch 20/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1818 - accuracy: 0.9375 - val_loss: 0.3195 - val_accuracy: 0.8894
Epoch 21/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1756 - accuracy: 0.9395 - val_loss: 0.3163 - val_accuracy: 0.8948
Epoch 22/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.1701 - accuracy: 0.9416 - val_loss: 0.3177 - val_accuracy: 0.8920
Epoch 23/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.1657 - accuracy: 0.9441 - val_loss: 0.3168 - val_accuracy: 0.8944
Epoch 24/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1629 - accuracy: 0.9454 - val_loss: 0.3167 - val_accuracy: 0.8946
Epoch 25/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.1611 - accuracy: 0.9465 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 7s 133us/sample - loss: 1.6006 - accuracy: 0.8129 - val_loss: 0.7374 - val_accuracy: 0.8236
Epoch 2/2
55000/55000 [==============================] - 7s 128us/sample - loss: 0.7179 - accuracy: 0.8265 - val_loss: 0.6905 - val_accuracy: 0.8356
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 139us/sample - loss: 0.5856 - accuracy: 0.7992 - val_loss: 0.3908 - val_accuracy: 0.8570
Epoch 2/2
55000/55000 [==============================] - 6s 117us/sample - loss: 0.4260 - accuracy: 0.8443 - val_loss: 0.3389 - val_accuracy: 0.8730
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
Train on 55000 samples
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4186 - accuracy: 0.8451
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
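# MCDropout is not exercised below (the model's dropout layers are AlphaDropout, so
# MCAlphaDropout is used instead), but a model built with regular keras.layers.Dropout
# layers could be converted the same way, e.g. (sketch):
#   mc_model = keras.models.Sequential([
#       MCDropout(layer.rate) if isinstance(layer, keras.layers.Dropout) else layer
#       for layer in model.layers])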
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 6s 114us/sample - loss: 0.4734 - accuracy: 0.8364 - val_loss: 0.3999 - val_accuracy: 0.8614
Epoch 2/2
55000/55000 [==============================] - 6s 100us/sample - loss: 0.3583 - accuracy: 0.8685 - val_loss: 0.3494 - val_accuracy: 0.8746
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
5000/5000 [==============================] - 0s 65us/sample - loss: 1.5099 - accuracy: 0.4736
###Markdown
The model with the lowest validation loss gets about 47% accuracy on the validation set. It took 39 epochs to reach the lowest validation loss, with roughly 10 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 21s 466us/sample - loss: 1.8365 - accuracy: 0.3390 - val_loss: 1.6330 - val_accuracy: 0.4174
Epoch 2/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.6623 - accuracy: 0.4063 - val_loss: 1.5967 - val_accuracy: 0.4204
Epoch 3/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.5946 - accuracy: 0.4314 - val_loss: 1.5225 - val_accuracy: 0.4602
Epoch 4/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5417 - accuracy: 0.4551 - val_loss: 1.4680 - val_accuracy: 0.4756
Epoch 5/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5013 - accuracy: 0.4678 - val_loss: 1.4378 - val_accuracy: 0.4862
Epoch 6/100
45000/45000 [==============================] - 16s 361us/sample - loss: 1.4637 - accuracy: 0.4797 - val_loss: 1.4221 - val_accuracy: 0.4982
Epoch 7/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.4361 - accuracy: 0.4921 - val_loss: 1.4133 - val_accuracy: 0.4968
Epoch 8/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.4078 - accuracy: 0.4998 - val_loss: 1.3916 - val_accuracy: 0.5040
Epoch 9/100
45000/45000 [==============================] - 14s 315us/sample - loss: 1.3811 - accuracy: 0.5104 - val_loss: 1.3695 - val_accuracy: 0.5116
Epoch 10/100
45000/45000 [==============================] - 14s 318us/sample - loss: 1.3571 - accuracy: 0.5205 - val_loss: 1.3701 - val_accuracy: 0.5112
Epoch 11/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.3367 - accuracy: 0.5246 - val_loss: 1.3549 - val_accuracy: 0.5196
Epoch 12/100
45000/45000 [==============================] - 14s 316us/sample - loss: 1.3158 - accuracy: 0.5322 - val_loss: 1.4038 - val_accuracy: 0.5048
Epoch 13/100
45000/45000 [==============================] - 15s 328us/sample - loss: 1.3028 - accuracy: 0.5392 - val_loss: 1.3453 - val_accuracy: 0.5242
Epoch 14/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2798 - accuracy: 0.5460 - val_loss: 1.3427 - val_accuracy: 0.5218
Epoch 15/100
45000/45000 [==============================] - 15s 327us/sample - loss: 1.2642 - accuracy: 0.5502 - val_loss: 1.3802 - val_accuracy: 0.5072
Epoch 16/100
45000/45000 [==============================] - 15s 336us/sample - loss: 1.2497 - accuracy: 0.5592 - val_loss: 1.3870 - val_accuracy: 0.5154
Epoch 17/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.2339 - accuracy: 0.5645 - val_loss: 1.3270 - val_accuracy: 0.5366
Epoch 18/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2223 - accuracy: 0.5688 - val_loss: 1.3054 - val_accuracy: 0.5506
Epoch 19/100
45000/45000 [==============================] - 15s 339us/sample - loss: 1.2015 - accuracy: 0.5750 - val_loss: 1.3134 - val_accuracy: 0.5462
Epoch 20/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.1884 - accuracy: 0.5796 - val_loss: 1.3459 - val_accuracy: 0.5252
Epoch 21/100
45000/45000 [==============================] - 17s 370us/sample - loss: 1.1767 - accuracy: 0.5876 - val_loss: 1.3404 - val_accuracy: 0.5392
Epoch 22/100
45000/45000 [==============================] - 16s 366us/sample - loss: 1.1679 - accuracy: 0.5872 - val_loss: 1.3600 - val_accuracy: 0.5332
Epoch 23/100
45000/45000 [==============================] - 15s 337us/sample - loss: 1.1513 - accuracy: 0.5954 - val_loss: 1.3148 - val_accuracy: 0.5498
Epoch 24/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.1345 - accuracy: 0.6033 - val_loss: 1.3290 - val_accuracy: 0.5368
Epoch 25/100
45000/45000 [==============================] - 16s 350us/sample - loss: 1.1252 - accuracy: 0.6025 - val_loss: 1.3350 - val_accuracy: 0.5434
Epoch 26/100
45000/45000 [==============================] - 15s 341us/sample - loss: 1.1192 - accuracy: 0.6070 - val_loss: 1.3423 - val_accuracy: 0.5364
Epoch 27/100
45000/45000 [==============================] - 15s 342us/sample - loss: 1.1028 - accuracy: 0.6093 - val_loss: 1.3511 - val_accuracy: 0.5358
Epoch 28/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.0907 - accuracy: 0.6158 - val_loss: 1.3706 - val_accuracy: 0.5350
Epoch 29/100
45000/45000 [==============================] - 16s 345us/sample - loss: 1.0785 - accuracy: 0.6197 - val_loss: 1.3356 - val_accuracy: 0.5398
Epoch 30/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.0718 - accuracy: 0.6198 - val_loss: 1.3529 - val_accuracy: 0.5446
Epoch 31/100
45000/45000 [==============================] - 15s 333us/sample - loss: 1.0629 - accuracy: 0.6259 - val_loss: 1.3590 - val_accuracy: 0.5434
Epoch 32/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.0504 - accuracy: 0.6292 - val_loss: 1.3448 - val_accuracy: 0.5388
Epoch 33/100
45000/45000 [==============================] - 15s 325us/sample - loss: 1.0420 - accuracy: 0.6318 - val_loss: 1.3790 - val_accuracy: 0.5350
Epoch 34/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.0304 - accuracy: 0.6362 - val_loss: 1.3621 - val_accuracy: 0.5430
Epoch 35/100
45000/45000 [==============================] - 16s 356us/sample - loss: 1.0280 - accuracy: 0.6362 - val_loss: 1.3673 - val_accuracy: 0.5366
Epoch 36/100
45000/45000 [==============================] - 16s 354us/sample - loss: 1.0100 - accuracy: 0.6439 - val_loss: 1.3659 - val_accuracy: 0.5420
Epoch 37/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.0060 - accuracy: 0.6473 - val_loss: 1.3773 - val_accuracy: 0.5398
Epoch 38/100
45000/45000 [==============================] - 15s 332us/sample - loss: 0.9966 - accuracy: 0.6496 - val_loss: 1.3946 - val_accuracy: 0.5340
5000/5000 [==============================] - 1s 157us/sample - loss: 1.3054 - accuracy: 0.5506
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 39 epochs to reach the lowest validation loss, while the new model with BN took 18 epochs. That's more than twice as fast as the previous model. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 55% accuracy instead of 47%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged twice as fast, each epoch took about 16s instead of 10s, because of the extra computations required by the BN layers. So overall, although the number of epochs was reduced by 50%, the training time (wall time) was shortened by 30%, which is still pretty significant! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
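A rough back-of-the-envelope check using the figures above: without BN, roughly 39 epochs × 10 s ≈ 390 s of training; with BN, roughly 18 epochs × 16 s ≈ 290 s, i.e. on the order of a 25–30% reduction in wall time.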
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
5000/5000 [==============================] - 0s 74us/sample - loss: 1.4626 - accuracy: 0.5140
###Markdown
We get 51.4% accuracy, which is better than the original model, but not quite as good as the model using batch normalization. Moreover, it took 13 epochs to reach the best model, which is much faster than both the original model and the BN model, plus each epoch took only 10 seconds, just like the original model. So it's by far the fastest model to train (both in terms of epochs and wall time). e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 12s 263us/sample - loss: 1.8763 - accuracy: 0.3330 - val_loss: 1.7595 - val_accuracy: 0.3668
Epoch 2/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.6527 - accuracy: 0.4148 - val_loss: 1.7666 - val_accuracy: 0.3808
Epoch 3/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.5682 - accuracy: 0.4439 - val_loss: 1.6393 - val_accuracy: 0.4490
Epoch 4/100
45000/45000 [==============================] - 10s 211us/sample - loss: 1.5030 - accuracy: 0.4698 - val_loss: 1.6028 - val_accuracy: 0.4466
Epoch 5/100
45000/45000 [==============================] - 9s 209us/sample - loss: 1.4430 - accuracy: 0.4913 - val_loss: 1.5394 - val_accuracy: 0.4562
Epoch 6/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.4005 - accuracy: 0.5084 - val_loss: 1.5408 - val_accuracy: 0.4818
Epoch 7/100
45000/45000 [==============================] - 10s 216us/sample - loss: 1.3541 - accuracy: 0.5298 - val_loss: 1.5236 - val_accuracy: 0.4866
Epoch 8/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.3189 - accuracy: 0.5405 - val_loss: 1.5174 - val_accuracy: 0.4926
Epoch 9/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.2800 - accuracy: 0.5570 - val_loss: 1.5722 - val_accuracy: 0.4998
Epoch 10/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.2512 - accuracy: 0.5656 - val_loss: 1.4974 - val_accuracy: 0.5082
Epoch 11/100
45000/45000 [==============================] - 9s 203us/sample - loss: 1.2141 - accuracy: 0.5802 - val_loss: 1.6123 - val_accuracy: 0.4916
Epoch 12/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.1856 - accuracy: 0.5893 - val_loss: 1.5449 - val_accuracy: 0.5016
Epoch 13/100
45000/45000 [==============================] - 9s 204us/sample - loss: 1.1602 - accuracy: 0.5978 - val_loss: 1.6241 - val_accuracy: 0.5056
Epoch 14/100
45000/45000 [==============================] - 9s 199us/sample - loss: 1.1290 - accuracy: 0.6118 - val_loss: 1.6085 - val_accuracy: 0.4936
Epoch 15/100
45000/45000 [==============================] - 9s 198us/sample - loss: 1.1050 - accuracy: 0.6176 - val_loss: 1.6951 - val_accuracy: 0.4860
Epoch 16/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.0786 - accuracy: 0.6293 - val_loss: 1.5806 - val_accuracy: 0.5044
Epoch 17/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.0629 - accuracy: 0.6362 - val_loss: 1.5932 - val_accuracy: 0.4970
Epoch 18/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.0330 - accuracy: 0.6458 - val_loss: 1.5968 - val_accuracy: 0.5080
Epoch 19/100
45000/45000 [==============================] - 9s 195us/sample - loss: 1.0104 - accuracy: 0.6488 - val_loss: 1.6166 - val_accuracy: 0.5152
Epoch 20/100
45000/45000 [==============================] - 9s 206us/sample - loss: 0.9896 - accuracy: 0.6629 - val_loss: 1.6174 - val_accuracy: 0.5154
Epoch 21/100
45000/45000 [==============================] - 9s 211us/sample - loss: 0.9741 - accuracy: 0.6650 - val_loss: 1.7201 - val_accuracy: 0.5040
Epoch 22/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9475 - accuracy: 0.6769 - val_loss: 1.7498 - val_accuracy: 0.5176
Epoch 23/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.9346 - accuracy: 0.6780 - val_loss: 1.7491 - val_accuracy: 0.5020
Epoch 24/100
45000/45000 [==============================] - 10s 223us/sample - loss: 1.1878 - accuracy: 0.6792 - val_loss: 1.6664 - val_accuracy: 0.4906
Epoch 25/100
45000/45000 [==============================] - 10s 219us/sample - loss: 0.9851 - accuracy: 0.6646 - val_loss: 1.7358 - val_accuracy: 0.5086
Epoch 26/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9053 - accuracy: 0.6911 - val_loss: 1.8361 - val_accuracy: 0.5094
Epoch 27/100
45000/45000 [==============================] - 10s 215us/sample - loss: 0.8681 - accuracy: 0.7048 - val_loss: 1.8487 - val_accuracy: 0.5036
Epoch 28/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.8460 - accuracy: 0.7132 - val_loss: 1.8516 - val_accuracy: 0.5068
Epoch 29/100
45000/45000 [==============================] - 10s 223us/sample - loss: 0.8258 - accuracy: 0.7208 - val_loss: 1.9383 - val_accuracy: 0.5094
Epoch 30/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.8106 - accuracy: 0.7248 - val_loss: 2.0527 - val_accuracy: 0.4974
5000/5000 [==============================] - 0s 71us/sample - loss: 1.4974 - accuracy: 0.5082
###Markdown
The model reaches 50.8% accuracy on the validation set. That's very slightly worse than without dropout (51.4%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get virtually no accuracy improvement in this case (from 50.8% to 50.9%). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(len(X_train_scaled) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/15
45000/45000 [==============================] - 3s 69us/sample - loss: 2.0504 - accuracy: 0.2823 - val_loss: 1.7711 - val_accuracy: 0.3706
Epoch 2/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.7626 - accuracy: 0.3766 - val_loss: 1.7751 - val_accuracy: 0.3844
Epoch 3/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.6264 - accuracy: 0.4272 - val_loss: 1.6774 - val_accuracy: 0.4216
Epoch 4/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.5527 - accuracy: 0.4474 - val_loss: 1.6633 - val_accuracy: 0.4316
Epoch 5/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.4997 - accuracy: 0.4701 - val_loss: 1.5909 - val_accuracy: 0.4540
Epoch 6/15
45000/45000 [==============================] - 3s 60us/sample - loss: 1.4564 - accuracy: 0.4841 - val_loss: 1.5982 - val_accuracy: 0.4624
Epoch 7/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.4232 - accuracy: 0.4958 - val_loss: 1.6417 - val_accuracy: 0.4382
Epoch 8/15
45000/45000 [==============================] - 3s 58us/sample - loss: 1.3530 - accuracy: 0.5199 - val_loss: 1.5050 - val_accuracy: 0.4778
Epoch 9/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.2771 - accuracy: 0.5480 - val_loss: 1.5254 - val_accuracy: 0.4928
Epoch 10/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.2073 - accuracy: 0.5726 - val_loss: 1.5013 - val_accuracy: 0.5052
Epoch 11/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.1380 - accuracy: 0.5948 - val_loss: 1.4941 - val_accuracy: 0.5170
Epoch 12/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.0672 - accuracy: 0.6204 - val_loss: 1.5091 - val_accuracy: 0.5106
Epoch 13/15
45000/45000 [==============================] - 3s 56us/sample - loss: 0.9967 - accuracy: 0.6466 - val_loss: 1.5261 - val_accuracy: 0.5212
Epoch 14/15
45000/45000 [==============================] - 3s 58us/sample - loss: 0.9301 - accuracy: 0.6712 - val_loss: 1.5437 - val_accuracy: 0.5264
Epoch 15/15
45000/45000 [==============================] - 3s 59us/sample - loss: 0.8893 - accuracy: 0.6866 - val_loss: 1.5650 - val_accuracy: 0.5276
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6314 - accuracy: 0.5054 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.8416 - accuracy: 0.7247 - val_loss: 0.7130 - val_accuracy: 0.7656
Epoch 3/10
1719/1719 [==============================] - 2s 879us/step - loss: 0.7053 - accuracy: 0.7637 - val_loss: 0.6427 - val_accuracy: 0.7898
Epoch 4/10
1719/1719 [==============================] - 2s 883us/step - loss: 0.6325 - accuracy: 0.7908 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 2s 887us/step - loss: 0.5992 - accuracy: 0.8021 - val_loss: 0.5582 - val_accuracy: 0.8200
Epoch 6/10
1719/1719 [==============================] - 2s 881us/step - loss: 0.5624 - accuracy: 0.8142 - val_loss: 0.5350 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.5379 - accuracy: 0.8217 - val_loss: 0.5157 - val_accuracy: 0.8304
Epoch 8/10
1719/1719 [==============================] - 2s 895us/step - loss: 0.5152 - accuracy: 0.8295 - val_loss: 0.5078 - val_accuracy: 0.8284
Epoch 9/10
1719/1719 [==============================] - 2s 911us/step - loss: 0.5100 - accuracy: 0.8268 - val_loss: 0.4895 - val_accuracy: 0.8390
Epoch 10/10
1719/1719 [==============================] - 2s 897us/step - loss: 0.4918 - accuracy: 0.8340 - val_loss: 0.4817 - val_accuracy: 0.8396
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6969 - accuracy: 0.4974 - val_loss: 0.9255 - val_accuracy: 0.7186
Epoch 2/10
1719/1719 [==============================] - 2s 990us/step - loss: 0.8706 - accuracy: 0.7247 - val_loss: 0.7305 - val_accuracy: 0.7630
Epoch 3/10
1719/1719 [==============================] - 2s 980us/step - loss: 0.7211 - accuracy: 0.7621 - val_loss: 0.6564 - val_accuracy: 0.7882
Epoch 4/10
1719/1719 [==============================] - 2s 985us/step - loss: 0.6447 - accuracy: 0.7879 - val_loss: 0.6003 - val_accuracy: 0.8048
Epoch 5/10
1719/1719 [==============================] - 2s 967us/step - loss: 0.6077 - accuracy: 0.8004 - val_loss: 0.5656 - val_accuracy: 0.8182
Epoch 6/10
1719/1719 [==============================] - 2s 984us/step - loss: 0.5692 - accuracy: 0.8118 - val_loss: 0.5406 - val_accuracy: 0.8236
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5428 - accuracy: 0.8194 - val_loss: 0.5196 - val_accuracy: 0.8314
Epoch 8/10
1719/1719 [==============================] - 2s 983us/step - loss: 0.5193 - accuracy: 0.8284 - val_loss: 0.5113 - val_accuracy: 0.8316
Epoch 9/10
1719/1719 [==============================] - 2s 992us/step - loss: 0.5128 - accuracy: 0.8272 - val_loss: 0.4916 - val_accuracy: 0.8378
Epoch 10/10
1719/1719 [==============================] - 2s 988us/step - loss: 0.4941 - accuracy: 0.8314 - val_loss: 0.4826 - val_accuracy: 0.8398
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
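With the constants computed in the next cell ($\alpha \approx 1.6733$ and scale $\lambda \approx 1.0507$), SELU is simply a scaled ELU: $\operatorname{SELU}(z) = \lambda z$ for $z > 0$ and $\operatorname{SELU}(z) = \lambda\,\alpha\,(e^z - 1)$ for $z \le 0$.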
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 12s 6ms/step - loss: 1.3556 - accuracy: 0.4808 - val_loss: 0.7711 - val_accuracy: 0.6858
Epoch 2/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7537 - accuracy: 0.7235 - val_loss: 0.7534 - val_accuracy: 0.7384
Epoch 3/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7451 - accuracy: 0.7357 - val_loss: 0.5943 - val_accuracy: 0.7834
Epoch 4/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5699 - accuracy: 0.7906 - val_loss: 0.5434 - val_accuracy: 0.8066
Epoch 5/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5569 - accuracy: 0.8051 - val_loss: 0.4907 - val_accuracy: 0.8218
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 11s 5ms/step - loss: 2.0460 - accuracy: 0.1919 - val_loss: 1.5971 - val_accuracy: 0.3048
Epoch 2/5
1719/1719 [==============================] - 8s 5ms/step - loss: 1.2654 - accuracy: 0.4591 - val_loss: 0.9156 - val_accuracy: 0.6372
Epoch 3/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.9312 - accuracy: 0.6169 - val_loss: 0.8928 - val_accuracy: 0.6246
Epoch 4/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.8188 - accuracy: 0.6710 - val_loss: 0.6914 - val_accuracy: 0.7396
Epoch 5/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.7288 - accuracy: 0.7152 - val_loss: 0.6638 - val_accuracy: 0.7380
###Markdown
Not great at all: we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
# bn1.updates  # deprecated: in TF 2 the BN statistics updates are applied automatically by the layer
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.2287 - accuracy: 0.5993 - val_loss: 0.5526 - val_accuracy: 0.8230
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5996 - accuracy: 0.7959 - val_loss: 0.4725 - val_accuracy: 0.8468
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5312 - accuracy: 0.8168 - val_loss: 0.4375 - val_accuracy: 0.8558
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4884 - accuracy: 0.8294 - val_loss: 0.4153 - val_accuracy: 0.8596
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4717 - accuracy: 0.8343 - val_loss: 0.3997 - val_accuracy: 0.8640
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4420 - accuracy: 0.8461 - val_loss: 0.3867 - val_accuracy: 0.8694
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4285 - accuracy: 0.8496 - val_loss: 0.3763 - val_accuracy: 0.8710
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4086 - accuracy: 0.8552 - val_loss: 0.3711 - val_accuracy: 0.8740
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4079 - accuracy: 0.8566 - val_loss: 0.3631 - val_accuracy: 0.8752
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3903 - accuracy: 0.8617 - val_loss: 0.3573 - val_accuracy: 0.8750
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has its own offset parameters; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.3677 - accuracy: 0.5604 - val_loss: 0.6767 - val_accuracy: 0.7812
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.7136 - accuracy: 0.7702 - val_loss: 0.5566 - val_accuracy: 0.8184
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.6123 - accuracy: 0.7990 - val_loss: 0.5007 - val_accuracy: 0.8360
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5547 - accuracy: 0.8148 - val_loss: 0.4666 - val_accuracy: 0.8448
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5255 - accuracy: 0.8230 - val_loss: 0.4434 - val_accuracy: 0.8534
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4947 - accuracy: 0.8328 - val_loss: 0.4263 - val_accuracy: 0.8550
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4736 - accuracy: 0.8385 - val_loss: 0.4130 - val_accuracy: 0.8566
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4550 - accuracy: 0.8446 - val_loss: 0.4035 - val_accuracy: 0.8612
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4495 - accuracy: 0.8440 - val_loss: 0.3943 - val_accuracy: 0.8638
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4333 - accuracy: 0.8494 - val_loss: 0.3875 - val_accuracy: 0.8660
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
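###Markdown
To see concretely what the two arguments do, here is a small illustrative sketch (not from the original notebook; it only assumes TensorFlow is imported as `tf`, as elsewhere in this notebook): `clipvalue` clips each gradient component independently, which can change the gradient's direction, while `clipnorm` rescales the whole gradient vector when its ℓ2 norm exceeds the threshold, preserving its direction.
###Code
# Illustrative sketch: compare clipping by value vs. by norm on a made-up gradient
grad = tf.constant([0.9, 100.0])
clipped_by_value = tf.clip_by_value(grad, -1.0, 1.0)  # -> [0.9, 1.0]: the direction changes
clipped_by_norm = tf.clip_by_norm(grad, 1.0)          # -> roughly [0.009, 1.0]: same direction, norm scaled to 1
###Output
_____no_output_____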
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
###Output
_____no_output_____
###Markdown
Note that `model_B_on_A` and `model_A` actually share layers now, so when we train one, it will update both models. If we want to avoid that, we need to build `model_B_on_A` on top of a *clone* of `model_A`:
###Code
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
model_B_on_A = keras.models.Sequential(model_A_clone.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 0s 29ms/step - loss: 0.2575 - accuracy: 0.9487 - val_loss: 0.2797 - val_accuracy: 0.9270
Epoch 2/4
7/7 [==============================] - 0s 9ms/step - loss: 0.2566 - accuracy: 0.9371 - val_loss: 0.2701 - val_accuracy: 0.9300
Epoch 3/4
7/7 [==============================] - 0s 9ms/step - loss: 0.2473 - accuracy: 0.9332 - val_loss: 0.2613 - val_accuracy: 0.9341
Epoch 4/4
7/7 [==============================] - 0s 10ms/step - loss: 0.2450 - accuracy: 0.9463 - val_loss: 0.2531 - val_accuracy: 0.9391
Epoch 1/16
7/7 [==============================] - 1s 29ms/step - loss: 0.2106 - accuracy: 0.9524 - val_loss: 0.2045 - val_accuracy: 0.9615
Epoch 2/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1738 - accuracy: 0.9526 - val_loss: 0.1719 - val_accuracy: 0.9706
Epoch 3/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1451 - accuracy: 0.9660 - val_loss: 0.1491 - val_accuracy: 0.9807
Epoch 4/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1242 - accuracy: 0.9717 - val_loss: 0.1325 - val_accuracy: 0.9817
Epoch 5/16
7/7 [==============================] - 0s 11ms/step - loss: 0.1078 - accuracy: 0.9855 - val_loss: 0.1200 - val_accuracy: 0.9848
Epoch 6/16
7/7 [==============================] - 0s 11ms/step - loss: 0.1075 - accuracy: 0.9931 - val_loss: 0.1101 - val_accuracy: 0.9858
Epoch 7/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0893 - accuracy: 0.9950 - val_loss: 0.1020 - val_accuracy: 0.9858
Epoch 8/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0815 - accuracy: 0.9950 - val_loss: 0.0953 - val_accuracy: 0.9868
Epoch 9/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0640 - accuracy: 0.9973 - val_loss: 0.0892 - val_accuracy: 0.9868
Epoch 10/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0641 - accuracy: 0.9931 - val_loss: 0.0844 - val_accuracy: 0.9878
Epoch 11/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0609 - accuracy: 0.9931 - val_loss: 0.0800 - val_accuracy: 0.9888
Epoch 12/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0641 - accuracy: 1.0000 - val_loss: 0.0762 - val_accuracy: 0.9888
Epoch 13/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0478 - accuracy: 1.0000 - val_loss: 0.0728 - val_accuracy: 0.9888
Epoch 14/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0444 - accuracy: 1.0000 - val_loss: 0.0700 - val_accuracy: 0.9878
Epoch 15/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0490 - accuracy: 1.0000 - val_loss: 0.0675 - val_accuracy: 0.9878
Epoch 16/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0434 - accuracy: 1.0000 - val_loss: 0.0652 - val_accuracy: 0.9878
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 751us/step - loss: 0.0562 - accuracy: 0.9940
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4.9!
###Code
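# error-rate ratio: model_B's test error (100 - 97.05) divided by model_B_on_A's test error (100 - 99.40), roughly 4.9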
(100 - 97.05) / (100 - 99.40)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(learning_rate=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
import math
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = math.ceil(len(X_train) / batch_size)
epochs = np.arange(n_epochs)
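# reproduce the legacy Keras decay schedule at epoch boundaries: lr = lr0 / (1 + decay * step)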
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
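    # multiplying the previous learning rate by 0.1**(1/20) at every epoch is equivalent to lr0 * 0.1**(epoch / 20)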
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
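        # multiply the learning rate by 0.1**(1/s) at every batch, so it is divided by 10 every s steps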
lr = K.get_value(self.model.optimizer.learning_rate)
K.set_value(self.model.optimizer.learning_rate, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.learning_rate)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(learning_rate=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
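        # np.argmax returns the index of the first boundary greater than epoch (or 0 if none),
        # so index - 1 picks the value for the current interval (wrapping to the last value after the final boundary)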
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5995 - accuracy: 0.7923 - val_loss: 0.4095 - val_accuracy: 0.8606
Epoch 2/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3890 - accuracy: 0.8613 - val_loss: 0.3738 - val_accuracy: 0.8692
Epoch 3/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3530 - accuracy: 0.8772 - val_loss: 0.3735 - val_accuracy: 0.8692
Epoch 4/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3296 - accuracy: 0.8813 - val_loss: 0.3494 - val_accuracy: 0.8798
Epoch 5/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3178 - accuracy: 0.8867 - val_loss: 0.3430 - val_accuracy: 0.8794
Epoch 6/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2930 - accuracy: 0.8951 - val_loss: 0.3414 - val_accuracy: 0.8826
Epoch 7/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2854 - accuracy: 0.8985 - val_loss: 0.3354 - val_accuracy: 0.8810
Epoch 8/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9039 - val_loss: 0.3364 - val_accuracy: 0.8824
Epoch 9/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9047 - val_loss: 0.3265 - val_accuracy: 0.8846
Epoch 10/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2570 - accuracy: 0.9084 - val_loss: 0.3238 - val_accuracy: 0.8854
Epoch 11/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2502 - accuracy: 0.9117 - val_loss: 0.3250 - val_accuracy: 0.8862
Epoch 12/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2453 - accuracy: 0.9145 - val_loss: 0.3299 - val_accuracy: 0.8830
Epoch 13/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2408 - accuracy: 0.9154 - val_loss: 0.3219 - val_accuracy: 0.8870
Epoch 14/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2380 - accuracy: 0.9154 - val_loss: 0.3221 - val_accuracy: 0.8860
Epoch 15/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2378 - accuracy: 0.9166 - val_loss: 0.3208 - val_accuracy: 0.8864
Epoch 16/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2318 - accuracy: 0.9191 - val_loss: 0.3184 - val_accuracy: 0.8892
Epoch 17/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2266 - accuracy: 0.9212 - val_loss: 0.3197 - val_accuracy: 0.8906
Epoch 18/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2284 - accuracy: 0.9185 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 19/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2286 - accuracy: 0.9205 - val_loss: 0.3197 - val_accuracy: 0.8884
Epoch 20/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2288 - accuracy: 0.9211 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 21/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2265 - accuracy: 0.9212 - val_loss: 0.3179 - val_accuracy: 0.8904
Epoch 22/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2258 - accuracy: 0.9205 - val_loss: 0.3163 - val_accuracy: 0.8914
Epoch 23/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9226 - val_loss: 0.3170 - val_accuracy: 0.8904
Epoch 24/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2182 - accuracy: 0.9244 - val_loss: 0.3165 - val_accuracy: 0.8898
Epoch 25/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9229 - val_loss: 0.3164 - val_accuracy: 0.8904
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.learning_rate))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.learning_rate, self.model.optimizer.learning_rate * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
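    # train for `epochs` while growing the learning rate exponentially from min_rate to max_rate,
    # record (rate, loss) at each batch, then restore the initial weights and learning rate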
init_weights = model.get_weights()
iterations = math.ceil(len(X) / batch_size) * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.learning_rate)
K.set_value(model.optimizer.learning_rate, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.learning_rate, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
###Output
_____no_output_____
###Markdown
**Warning**: In the `on_batch_end()` method, `logs["loss"]` used to contain the batch loss, but in TensorFlow 2.2.0 it was replaced with the mean loss (since the start of the epoch). This explains why the graph below is much smoother than in the book (if you are using TF 2.2 or above). It also means that there is a lag between the moment the batch loss starts exploding and the moment the explosion becomes clear in the graph. So you should choose a slightly smaller learning rate than you would have chosen with the "noisy" graph. Alternatively, you can tweak the `ExponentialLearningRate` callback above so it computes the batch loss (based on the current mean loss and the previous mean loss):
```python
class ExponentialLearningRate(keras.callbacks.Callback):
    def __init__(self, factor):
        self.factor = factor
        self.rates = []
        self.losses = []
    def on_epoch_begin(self, epoch, logs=None):
        self.prev_loss = 0
    def on_batch_end(self, batch, logs=None):
        batch_loss = logs["loss"] * (batch + 1) - self.prev_loss * batch
        self.prev_loss = logs["loss"]
        self.rates.append(K.get_value(self.model.optimizer.learning_rate))
        self.losses.append(batch_loss)
        K.set_value(self.model.optimizer.learning_rate,
                    self.model.optimizer.learning_rate * self.factor)
```
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
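        # three linear phases: ramp the LR up from start_rate to max_rate, ramp it back down
        # to start_rate, then drop it to last_rate over the final iterations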
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.learning_rate, rate)
n_epochs = 25
onecycle = OneCycleScheduler(math.ceil(len(X_train) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 1s 2ms/step - loss: 0.6572 - accuracy: 0.7740 - val_loss: 0.4872 - val_accuracy: 0.8338
Epoch 2/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4580 - accuracy: 0.8397 - val_loss: 0.4274 - val_accuracy: 0.8520
Epoch 3/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4121 - accuracy: 0.8545 - val_loss: 0.4116 - val_accuracy: 0.8588
Epoch 4/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3837 - accuracy: 0.8642 - val_loss: 0.3868 - val_accuracy: 0.8688
Epoch 5/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3639 - accuracy: 0.8719 - val_loss: 0.3766 - val_accuracy: 0.8688
Epoch 6/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3456 - accuracy: 0.8775 - val_loss: 0.3739 - val_accuracy: 0.8706
Epoch 7/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3330 - accuracy: 0.8811 - val_loss: 0.3635 - val_accuracy: 0.8708
Epoch 8/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3184 - accuracy: 0.8861 - val_loss: 0.3959 - val_accuracy: 0.8610
Epoch 9/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3065 - accuracy: 0.8890 - val_loss: 0.3475 - val_accuracy: 0.8770
Epoch 10/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2943 - accuracy: 0.8927 - val_loss: 0.3392 - val_accuracy: 0.8806
Epoch 11/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2838 - accuracy: 0.8963 - val_loss: 0.3467 - val_accuracy: 0.8800
Epoch 12/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2707 - accuracy: 0.9024 - val_loss: 0.3646 - val_accuracy: 0.8696
Epoch 13/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2536 - accuracy: 0.9079 - val_loss: 0.3350 - val_accuracy: 0.8842
Epoch 14/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2405 - accuracy: 0.9135 - val_loss: 0.3465 - val_accuracy: 0.8794
Epoch 15/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2279 - accuracy: 0.9185 - val_loss: 0.3257 - val_accuracy: 0.8830
Epoch 16/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2159 - accuracy: 0.9232 - val_loss: 0.3294 - val_accuracy: 0.8824
Epoch 17/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2062 - accuracy: 0.9263 - val_loss: 0.3333 - val_accuracy: 0.8882
Epoch 18/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1978 - accuracy: 0.9301 - val_loss: 0.3235 - val_accuracy: 0.8898
Epoch 19/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1892 - accuracy: 0.9337 - val_loss: 0.3233 - val_accuracy: 0.8906
Epoch 20/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1821 - accuracy: 0.9365 - val_loss: 0.3224 - val_accuracy: 0.8928
Epoch 21/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1752 - accuracy: 0.9400 - val_loss: 0.3220 - val_accuracy: 0.8908
Epoch 22/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1700 - accuracy: 0.9416 - val_loss: 0.3180 - val_accuracy: 0.8962
Epoch 23/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1655 - accuracy: 0.9438 - val_loss: 0.3187 - val_accuracy: 0.8940
Epoch 24/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1627 - accuracy: 0.9454 - val_loss: 0.3177 - val_accuracy: 0.8932
Epoch 25/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1610 - accuracy: 0.9462 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 3.2911 - accuracy: 0.7924 - val_loss: 0.7218 - val_accuracy: 0.8310
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.7282 - accuracy: 0.8245 - val_loss: 0.6826 - val_accuracy: 0.8382
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 0.7611 - accuracy: 0.7576 - val_loss: 0.3730 - val_accuracy: 0.8644
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4306 - accuracy: 0.8401 - val_loss: 0.3395 - val_accuracy: 0.8722
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4225 - accuracy: 0.8432
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
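# 100 stochastic forward passes with dropout kept active (training=True), stacked along a new
# first axis; averaging over that axis gives the MC Dropout predictions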
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
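# subclasses that keep dropout active at inference time by forcing training=True in call()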
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
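# max_norm(1.) rescales each neuron's incoming weight vector after each training step
# so that its L2 norm never exceeds 1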
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5763 - accuracy: 0.8020 - val_loss: 0.3674 - val_accuracy: 0.8674
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3545 - accuracy: 0.8709 - val_loss: 0.3714 - val_accuracy: 0.8662
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(learning_rate=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4960 - accuracy: 0.4762
###Markdown
The model with the lowest validation loss gets about 47.6% accuracy on the validation set. It took 27 epochs to reach the lowest validation loss, with roughly 8 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 19s 9ms/step - loss: 1.9765 - accuracy: 0.2968 - val_loss: 1.6602 - val_accuracy: 0.4042
Epoch 2/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6787 - accuracy: 0.4056 - val_loss: 1.5887 - val_accuracy: 0.4304
Epoch 3/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6097 - accuracy: 0.4274 - val_loss: 1.5781 - val_accuracy: 0.4326
Epoch 4/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5574 - accuracy: 0.4486 - val_loss: 1.5064 - val_accuracy: 0.4676
Epoch 5/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5075 - accuracy: 0.4642 - val_loss: 1.4412 - val_accuracy: 0.4844
Epoch 6/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4664 - accuracy: 0.4787 - val_loss: 1.4179 - val_accuracy: 0.4984
Epoch 7/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4334 - accuracy: 0.4932 - val_loss: 1.4277 - val_accuracy: 0.4906
Epoch 8/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.4054 - accuracy: 0.5038 - val_loss: 1.3843 - val_accuracy: 0.5130
Epoch 9/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3816 - accuracy: 0.5106 - val_loss: 1.3691 - val_accuracy: 0.5108
Epoch 10/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3547 - accuracy: 0.5206 - val_loss: 1.3552 - val_accuracy: 0.5226
Epoch 11/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.3244 - accuracy: 0.5371 - val_loss: 1.3678 - val_accuracy: 0.5142
Epoch 12/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3078 - accuracy: 0.5393 - val_loss: 1.3844 - val_accuracy: 0.5080
Epoch 13/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2889 - accuracy: 0.5431 - val_loss: 1.3566 - val_accuracy: 0.5164
Epoch 14/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2607 - accuracy: 0.5559 - val_loss: 1.3626 - val_accuracy: 0.5248
Epoch 15/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2580 - accuracy: 0.5587 - val_loss: 1.3616 - val_accuracy: 0.5276
Epoch 16/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2441 - accuracy: 0.5586 - val_loss: 1.3350 - val_accuracy: 0.5286
Epoch 17/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2241 - accuracy: 0.5676 - val_loss: 1.3370 - val_accuracy: 0.5408
Epoch 18/100
<<29 more lines>>
Epoch 33/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0336 - accuracy: 0.6369 - val_loss: 1.3682 - val_accuracy: 0.5450
Epoch 34/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.0228 - accuracy: 0.6388 - val_loss: 1.3348 - val_accuracy: 0.5458
Epoch 35/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0205 - accuracy: 0.6407 - val_loss: 1.3490 - val_accuracy: 0.5440
Epoch 36/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.0008 - accuracy: 0.6489 - val_loss: 1.3568 - val_accuracy: 0.5408
Epoch 37/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9785 - accuracy: 0.6543 - val_loss: 1.3628 - val_accuracy: 0.5396
Epoch 38/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9832 - accuracy: 0.6592 - val_loss: 1.3617 - val_accuracy: 0.5482
Epoch 39/100
1407/1407 [==============================] - 12s 8ms/step - loss: 0.9707 - accuracy: 0.6581 - val_loss: 1.3767 - val_accuracy: 0.5446
Epoch 40/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9590 - accuracy: 0.6651 - val_loss: 1.4200 - val_accuracy: 0.5314
Epoch 41/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9548 - accuracy: 0.6668 - val_loss: 1.3692 - val_accuracy: 0.5450
Epoch 42/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9480 - accuracy: 0.6667 - val_loss: 1.3841 - val_accuracy: 0.5310
Epoch 43/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9411 - accuracy: 0.6716 - val_loss: 1.4036 - val_accuracy: 0.5382
Epoch 44/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9383 - accuracy: 0.6708 - val_loss: 1.4114 - val_accuracy: 0.5236
Epoch 45/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9258 - accuracy: 0.6769 - val_loss: 1.4224 - val_accuracy: 0.5324
Epoch 46/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9072 - accuracy: 0.6836 - val_loss: 1.3875 - val_accuracy: 0.5442
Epoch 47/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8996 - accuracy: 0.6850 - val_loss: 1.4449 - val_accuracy: 0.5280
Epoch 48/100
1407/1407 [==============================] - 13s 9ms/step - loss: 0.9050 - accuracy: 0.6835 - val_loss: 1.4167 - val_accuracy: 0.5338
Epoch 49/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8934 - accuracy: 0.6880 - val_loss: 1.4260 - val_accuracy: 0.5294
157/157 [==============================] - 1s 2ms/step - loss: 1.3344 - accuracy: 0.5398
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 27 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 5 epochs and continued to make progress until the 16th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 54.0% accuracy instead of 47.6%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 12s instead of 8s, because of the extra computations required by the BN layers. But overall the training time (wall time) was shortened significantly! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustements to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4633 - accuracy: 0.4792
###Markdown
We get 47.9% accuracy, which is not much better than the original model (47.6%), and not as good as the model using batch normalization (54.0%). However, convergence was almost as fast as with the BN model, plus each epoch took only 7 seconds. So it's by far the fastest model to train so far. e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 9s 5ms/step - loss: 2.0583 - accuracy: 0.2742 - val_loss: 1.7429 - val_accuracy: 0.3858
Epoch 2/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.6852 - accuracy: 0.4008 - val_loss: 1.7055 - val_accuracy: 0.3792
Epoch 3/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5963 - accuracy: 0.4413 - val_loss: 1.7401 - val_accuracy: 0.4072
Epoch 4/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5231 - accuracy: 0.4634 - val_loss: 1.5728 - val_accuracy: 0.4584
Epoch 5/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.4619 - accuracy: 0.4887 - val_loss: 1.5448 - val_accuracy: 0.4702
Epoch 6/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.4074 - accuracy: 0.5061 - val_loss: 1.5678 - val_accuracy: 0.4664
Epoch 7/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.3718 - accuracy: 0.5222 - val_loss: 1.5764 - val_accuracy: 0.4824
Epoch 8/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.3220 - accuracy: 0.5387 - val_loss: 1.4805 - val_accuracy: 0.4890
Epoch 9/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2908 - accuracy: 0.5487 - val_loss: 1.5521 - val_accuracy: 0.4638
Epoch 10/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.2537 - accuracy: 0.5607 - val_loss: 1.5281 - val_accuracy: 0.4924
Epoch 11/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2215 - accuracy: 0.5782 - val_loss: 1.5147 - val_accuracy: 0.5046
Epoch 12/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.1910 - accuracy: 0.5831 - val_loss: 1.5248 - val_accuracy: 0.5002
Epoch 13/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1659 - accuracy: 0.5982 - val_loss: 1.5620 - val_accuracy: 0.5066
Epoch 14/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1282 - accuracy: 0.6120 - val_loss: 1.5440 - val_accuracy: 0.5180
Epoch 15/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1127 - accuracy: 0.6133 - val_loss: 1.5782 - val_accuracy: 0.5146
Epoch 16/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0917 - accuracy: 0.6266 - val_loss: 1.6182 - val_accuracy: 0.5182
Epoch 17/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.0620 - accuracy: 0.6331 - val_loss: 1.6285 - val_accuracy: 0.5126
Epoch 18/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0433 - accuracy: 0.6413 - val_loss: 1.6299 - val_accuracy: 0.5158
Epoch 19/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0087 - accuracy: 0.6549 - val_loss: 1.7172 - val_accuracy: 0.5062
Epoch 20/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9950 - accuracy: 0.6571 - val_loss: 1.6524 - val_accuracy: 0.5098
Epoch 21/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9848 - accuracy: 0.6652 - val_loss: 1.7686 - val_accuracy: 0.5038
Epoch 22/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9597 - accuracy: 0.6744 - val_loss: 1.6177 - val_accuracy: 0.5084
Epoch 23/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9399 - accuracy: 0.6790 - val_loss: 1.7095 - val_accuracy: 0.5082
Epoch 24/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9148 - accuracy: 0.6884 - val_loss: 1.7160 - val_accuracy: 0.5150
Epoch 25/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9023 - accuracy: 0.6949 - val_loss: 1.7017 - val_accuracy: 0.5152
Epoch 26/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8732 - accuracy: 0.7031 - val_loss: 1.7274 - val_accuracy: 0.5088
Epoch 27/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.8542 - accuracy: 0.7091 - val_loss: 1.7648 - val_accuracy: 0.5166
Epoch 28/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8499 - accuracy: 0.7118 - val_loss: 1.7973 - val_accuracy: 0.5000
157/157 [==============================] - 0s 1ms/step - loss: 1.4805 - accuracy: 0.4890
###Markdown
The model reaches 48.9% accuracy on the validation set. That's very slightly better than without dropout (47.6%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
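###Markdown
By analogy (a sketch, not a cell from this notebook): the same `training=True` trick can be applied to regular `Dropout` layers, should your model use those instead of `AlphaDropout`:
###Code
class MCDropout(keras.layers.Dropout):
    def call(self, inputs):
        # keep dropout active at inference time, so each forward pass
        # becomes one Monte Carlo sample
        return super().call(inputs, training=True)
###Output
_____no_output_____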
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get no accuracy improvement in this case (we're still at 48.9% accuracy). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
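###Markdown
Note: the code below relies on the `find_learning_rate`/`plot_lr_vs_loss` utilities and the `OneCycleScheduler` callback defined earlier in this notebook. As a reminder, a minimal sketch of such a 1cycle callback (named `OneCycleSketch` here to make clear it is only an illustration; the real `OneCycleScheduler` may differ in details such as a final low-learning-rate phase) could look like this:
###Code
class OneCycleSketch(keras.callbacks.Callback):
    def __init__(self, iterations, max_rate):
        self.iterations = iterations   # total number of training batches
        self.max_rate = max_rate       # peak learning rate at mid-training
        self.iteration = 0
    def on_train_begin(self, logs=None):
        # remember the optimizer's initial learning rate
        self.start_rate = keras.backend.get_value(self.model.optimizer.learning_rate)
    def on_batch_begin(self, batch, logs=None):
        half = self.iterations // 2
        if self.iteration < half:
            # first half: ramp the learning rate up linearly to max_rate
            rate = self.start_rate + (self.max_rate - self.start_rate) * self.iteration / half
        else:
            # second half: ramp it back down linearly to the starting rate
            rate = self.max_rate - (self.max_rate - self.start_rate) * (self.iteration - half) / half
        keras.backend.set_value(self.model.optimizer.learning_rate, rate)
        self.iteration += 1
###Output
_____no_output_____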
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(learning_rate=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(math.ceil(len(X_train_scaled) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/15
352/352 [==============================] - 3s 6ms/step - loss: 2.2298 - accuracy: 0.2349 - val_loss: 1.7841 - val_accuracy: 0.3834
Epoch 2/15
352/352 [==============================] - 2s 6ms/step - loss: 1.7928 - accuracy: 0.3689 - val_loss: 1.6806 - val_accuracy: 0.4086
Epoch 3/15
352/352 [==============================] - 2s 6ms/step - loss: 1.6475 - accuracy: 0.4190 - val_loss: 1.6378 - val_accuracy: 0.4350
Epoch 4/15
352/352 [==============================] - 2s 6ms/step - loss: 1.5428 - accuracy: 0.4543 - val_loss: 1.6266 - val_accuracy: 0.4390
Epoch 5/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4865 - accuracy: 0.4769 - val_loss: 1.6158 - val_accuracy: 0.4384
Epoch 6/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4339 - accuracy: 0.4866 - val_loss: 1.5850 - val_accuracy: 0.4412
Epoch 7/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4042 - accuracy: 0.5056 - val_loss: 1.6146 - val_accuracy: 0.4384
Epoch 8/15
352/352 [==============================] - 2s 6ms/step - loss: 1.3437 - accuracy: 0.5229 - val_loss: 1.5299 - val_accuracy: 0.4846
Epoch 9/15
352/352 [==============================] - 2s 5ms/step - loss: 1.2721 - accuracy: 0.5459 - val_loss: 1.5145 - val_accuracy: 0.4874
Epoch 10/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1942 - accuracy: 0.5698 - val_loss: 1.4958 - val_accuracy: 0.5040
Epoch 11/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1211 - accuracy: 0.6033 - val_loss: 1.5406 - val_accuracy: 0.4984
Epoch 12/15
352/352 [==============================] - 2s 6ms/step - loss: 1.0673 - accuracy: 0.6161 - val_loss: 1.5284 - val_accuracy: 0.5144
Epoch 13/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9927 - accuracy: 0.6435 - val_loss: 1.5449 - val_accuracy: 0.5140
Epoch 14/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9205 - accuracy: 0.6703 - val_loss: 1.5652 - val_accuracy: 0.5224
Epoch 15/15
352/352 [==============================] - 2s 6ms/step - loss: 0.8936 - accuracy: 0.6801 - val_loss: 1.5912 - val_accuracy: 0.5198
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Run in Google Colab Setup First, let's import a few common modules, make sure Matplotlib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it will be deprecated soon, so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("그림 저장:", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
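###Markdown
As a quick sanity check (a sketch, not a cell from the book), you can sample weights from the He-style initializer above and verify that their standard deviation is close to the expected sqrt(2 / fan_avg):
###Code
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
                                          distribution='uniform')
weights = init(shape=(300, 100)).numpy()   # fan_in=300, fan_out=100, so fan_avg=200
print(weights.std(), np.sqrt(2. / 200))    # both values should be close to 0.1
###Output
_____no_output_____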
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 5s 3ms/step - loss: 1.2819 - accuracy: 0.6229 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.7955 - accuracy: 0.7361 - val_loss: 0.7130 - val_accuracy: 0.7656
Epoch 3/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.6816 - accuracy: 0.7721 - val_loss: 0.6427 - val_accuracy: 0.7898
Epoch 4/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.6217 - accuracy: 0.7943 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5832 - accuracy: 0.8075 - val_loss: 0.5582 - val_accuracy: 0.8202
Epoch 6/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5553 - accuracy: 0.8157 - val_loss: 0.5350 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5338 - accuracy: 0.8224 - val_loss: 0.5157 - val_accuracy: 0.8304
Epoch 8/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5172 - accuracy: 0.8273 - val_loss: 0.5079 - val_accuracy: 0.8282
Epoch 9/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5040 - accuracy: 0.8289 - val_loss: 0.4895 - val_accuracy: 0.8386
Epoch 10/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4924 - accuracy: 0.8321 - val_loss: 0.4817 - val_accuracy: 0.8396
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 6s 3ms/step - loss: 1.3461 - accuracy: 0.6209 - val_loss: 0.9255 - val_accuracy: 0.7184
Epoch 2/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.8197 - accuracy: 0.7355 - val_loss: 0.7305 - val_accuracy: 0.7628
Epoch 3/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.6966 - accuracy: 0.7694 - val_loss: 0.6565 - val_accuracy: 0.7880
Epoch 4/10
1719/1719 [==============================] - 6s 3ms/step - loss: 0.6331 - accuracy: 0.7909 - val_loss: 0.6003 - val_accuracy: 0.8048
Epoch 5/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5917 - accuracy: 0.8057 - val_loss: 0.5656 - val_accuracy: 0.8184
Epoch 6/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5618 - accuracy: 0.8134 - val_loss: 0.5406 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5390 - accuracy: 0.8206 - val_loss: 0.5196 - val_accuracy: 0.8312
Epoch 8/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5213 - accuracy: 0.8257 - val_loss: 0.5113 - val_accuracy: 0.8320
Epoch 9/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5070 - accuracy: 0.8288 - val_loss: 0.4916 - val_accuracy: 0.8380
Epoch 10/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4945 - accuracy: 0.8315 - val_loss: 0.4826 - val_accuracy: 0.8396
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is easy: just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU Günter Klambauer, Thomas Unterthiner and Andreas Mayr introduced the SELU activation function in a [great paper](https://arxiv.org/pdf/1706.02515.pdf) in 2017. During training, a neural network built exclusively from a stack of dense layers that uses the SELU activation function and LeCun initialization will self-normalize: the output of each layer tends to preserve its mean and standard deviation, which prevents the vanishing/exploding gradients problem. As a result, the SELU activation function often outperforms other activation functions for this kind of network (especially very deep ones), so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). In practice, however, it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self-normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned so that the mean output of each neuron remains close to 0 and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000-layer deep neural network keeps roughly mean 0 and standard deviation 1 across all layers, which avoids the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
    W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 32s 19ms/step - loss: 1.4254 - accuracy: 0.4457 - val_loss: 0.9036 - val_accuracy: 0.6758
Epoch 2/5
1719/1719 [==============================] - 32s 19ms/step - loss: 0.8673 - accuracy: 0.6903 - val_loss: 0.7675 - val_accuracy: 0.7316
Epoch 3/5
1719/1719 [==============================] - 32s 18ms/step - loss: 0.6920 - accuracy: 0.7525 - val_loss: 0.6481 - val_accuracy: 0.7694
Epoch 4/5
1719/1719 [==============================] - 32s 18ms/step - loss: 0.6801 - accuracy: 0.7533 - val_loss: 0.6137 - val_accuracy: 0.7852
Epoch 5/5
1719/1719 [==============================] - 32s 18ms/step - loss: 0.5883 - accuracy: 0.7845 - val_loss: 0.5503 - val_accuracy: 0.8036
###Markdown
Now let's see what happens if we use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 33s 19ms/step - loss: 1.8139 - accuracy: 0.2607 - val_loss: 1.4307 - val_accuracy: 0.3734
Epoch 2/5
1719/1719 [==============================] - 32s 19ms/step - loss: 1.1872 - accuracy: 0.4937 - val_loss: 1.0023 - val_accuracy: 0.5844
Epoch 3/5
1719/1719 [==============================] - 32s 19ms/step - loss: 0.9595 - accuracy: 0.6029 - val_loss: 0.8268 - val_accuracy: 0.6698
Epoch 4/5
1719/1719 [==============================] - 32s 19ms/step - loss: 0.9046 - accuracy: 0.6324 - val_loss: 0.8080 - val_accuracy: 0.6908
Epoch 5/5
1719/1719 [==============================] - 32s 19ms/step - loss: 0.8454 - accuracy: 0.6642 - val_loss: 0.7522 - val_accuracy: 0.7180
###Markdown
Not great at all: we suffered from the exploding/vanishing gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.8750 - accuracy: 0.7123 - val_loss: 0.5525 - val_accuracy: 0.8228
Epoch 2/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.5753 - accuracy: 0.8031 - val_loss: 0.4724 - val_accuracy: 0.8476
Epoch 3/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.5189 - accuracy: 0.8205 - val_loss: 0.4375 - val_accuracy: 0.8546
Epoch 4/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4827 - accuracy: 0.8322 - val_loss: 0.4152 - val_accuracy: 0.8594
Epoch 5/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4565 - accuracy: 0.8408 - val_loss: 0.3997 - val_accuracy: 0.8636
Epoch 6/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4398 - accuracy: 0.8472 - val_loss: 0.3867 - val_accuracy: 0.8700
Epoch 7/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4242 - accuracy: 0.8511 - val_loss: 0.3762 - val_accuracy: 0.8706
Epoch 8/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4144 - accuracy: 0.8541 - val_loss: 0.3710 - val_accuracy: 0.8736
Epoch 9/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4024 - accuracy: 0.8581 - val_loss: 0.3630 - val_accuracy: 0.8756
Epoch 10/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.3915 - accuracy: 0.8623 - val_loss: 0.3572 - val_accuracy: 0.8754
###Markdown
Sometimes applying BN before the activation function works just as well (there is some debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need bias terms, since the `BatchNormalization` layer cancels them out. These parameters would be wasted, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 8s 5ms/step - loss: 1.0317 - accuracy: 0.6757 - val_loss: 0.6767 - val_accuracy: 0.7816
Epoch 2/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.6790 - accuracy: 0.7792 - val_loss: 0.5566 - val_accuracy: 0.8180
Epoch 3/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.5960 - accuracy: 0.8037 - val_loss: 0.5007 - val_accuracy: 0.8360
Epoch 4/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.5447 - accuracy: 0.8192 - val_loss: 0.4666 - val_accuracy: 0.8448
Epoch 5/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.5109 - accuracy: 0.8279 - val_loss: 0.4434 - val_accuracy: 0.8534
Epoch 6/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4898 - accuracy: 0.8336 - val_loss: 0.4263 - val_accuracy: 0.8550
Epoch 7/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4712 - accuracy: 0.8397 - val_loss: 0.4130 - val_accuracy: 0.8572
Epoch 8/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4560 - accuracy: 0.8441 - val_loss: 0.4035 - val_accuracy: 0.8606
Epoch 9/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4441 - accuracy: 0.8473 - val_loss: 0.3943 - val_accuracy: 0.8642
Epoch 10/10
1719/1719 [==============================] - 8s 5ms/step - loss: 0.4332 - accuracy: 0.8505 - val_loss: 0.3874 - val_accuracy: 0.8662
###Markdown
Gradient Clipping All Keras optimizers support the `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
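###Markdown
To make the effect of `clipvalue` concrete, here is a tiny sketch (not a cell from the book): each gradient component is clipped to the range [-clipvalue, clipvalue] before the update is applied.
###Code
w = tf.Variable([0.0, 0.0])
optimizer = keras.optimizers.SGD(learning_rate=1.0, clipvalue=1.0)
with tf.GradientTape() as tape:
    loss = 100.0 * w[0] + 0.5 * w[1]   # gradient w.r.t. w is [100.0, 0.5]
grads = tape.gradient(loss, [w])
optimizer.apply_gradients(zip(grads, [w]))
print(w.numpy())   # roughly [-1.0, -0.5]: the large component was clipped to 1.0
###Output
_____no_output_____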
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images except sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (a classification task with 8 classes) and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since the classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to the classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers transfer much more information, since learned patterns can be detected anywhere on the image; we will look at this in detail in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 0s 39ms/step - loss: 0.5803 - accuracy: 0.6500 - val_loss: 0.5842 - val_accuracy: 0.6329
Epoch 2/4
7/7 [==============================] - 0s 16ms/step - loss: 0.5436 - accuracy: 0.6800 - val_loss: 0.5466 - val_accuracy: 0.6724
Epoch 3/4
7/7 [==============================] - 0s 16ms/step - loss: 0.5066 - accuracy: 0.7300 - val_loss: 0.5144 - val_accuracy: 0.7099
Epoch 4/4
7/7 [==============================] - 0s 16ms/step - loss: 0.4749 - accuracy: 0.7500 - val_loss: 0.4855 - val_accuracy: 0.7312
Epoch 1/16
7/7 [==============================] - 0s 41ms/step - loss: 0.3964 - accuracy: 0.8100 - val_loss: 0.3461 - val_accuracy: 0.8631
Epoch 2/16
7/7 [==============================] - 0s 15ms/step - loss: 0.2799 - accuracy: 0.9350 - val_loss: 0.2603 - val_accuracy: 0.9260
Epoch 3/16
7/7 [==============================] - 0s 16ms/step - loss: 0.2083 - accuracy: 0.9650 - val_loss: 0.2110 - val_accuracy: 0.9544
Epoch 4/16
7/7 [==============================] - 0s 16ms/step - loss: 0.1670 - accuracy: 0.9800 - val_loss: 0.1790 - val_accuracy: 0.9696
Epoch 5/16
7/7 [==============================] - 0s 18ms/step - loss: 0.1397 - accuracy: 0.9800 - val_loss: 0.1562 - val_accuracy: 0.9757
Epoch 6/16
7/7 [==============================] - 0s 16ms/step - loss: 0.1198 - accuracy: 0.9950 - val_loss: 0.1394 - val_accuracy: 0.9807
Epoch 7/16
7/7 [==============================] - 0s 16ms/step - loss: 0.1051 - accuracy: 0.9950 - val_loss: 0.1267 - val_accuracy: 0.9838
Epoch 8/16
7/7 [==============================] - 0s 16ms/step - loss: 0.0938 - accuracy: 0.9950 - val_loss: 0.1164 - val_accuracy: 0.9858
Epoch 9/16
7/7 [==============================] - 0s 15ms/step - loss: 0.0848 - accuracy: 1.0000 - val_loss: 0.1067 - val_accuracy: 0.9888
Epoch 10/16
7/7 [==============================] - 0s 16ms/step - loss: 0.0763 - accuracy: 1.0000 - val_loss: 0.1001 - val_accuracy: 0.9899
Epoch 11/16
7/7 [==============================] - 0s 15ms/step - loss: 0.0705 - accuracy: 1.0000 - val_loss: 0.0941 - val_accuracy: 0.9899
Epoch 12/16
7/7 [==============================] - 0s 15ms/step - loss: 0.0650 - accuracy: 1.0000 - val_loss: 0.0889 - val_accuracy: 0.9899
Epoch 13/16
7/7 [==============================] - 0s 17ms/step - loss: 0.0603 - accuracy: 1.0000 - val_loss: 0.0840 - val_accuracy: 0.9899
Epoch 14/16
7/7 [==============================] - 0s 18ms/step - loss: 0.0560 - accuracy: 1.0000 - val_loss: 0.0804 - val_accuracy: 0.9899
Epoch 15/16
7/7 [==============================] - 0s 18ms/step - loss: 0.0526 - accuracy: 1.0000 - val_loss: 0.0770 - val_accuracy: 0.9899
Epoch 16/16
7/7 [==============================] - 0s 18ms/step - loss: 0.0497 - accuracy: 1.0000 - val_loss: 0.0740 - val_accuracy: 0.9899
###Markdown
So, what's the final score?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 2ms/step - loss: 0.0683 - accuracy: 0.9930
###Markdown
Great! We transferred quite a bit of knowledge: the error rate dropped by a factor of four!
###Code
(100 - 96.95) / (100 - 99.25)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum Optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
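###Markdown
Whichever of these optimizers you pick, it is used the same way: pass it to `compile()` and train as usual. A minimal sketch (assuming the Fashion MNIST arrays `X_train`, `y_train`, `X_valid`, `y_valid` loaded earlier in this notebook):
###Code
model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"),
    keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy",
              optimizer=optimizer,
              metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=5,
                    validation_data=(X_valid, y_valid))
###Output
_____no_output_____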
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:  # enable GPU memory growth only when a GPU is actually present
    tf.config.experimental.set_memory_growth(gpus[0], True)
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 2s 41us/sample - loss: 1.2810 - accuracy: 0.6205 - val_loss: 0.8869 - val_accuracy: 0.7160
Epoch 2/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.7952 - accuracy: 0.7369 - val_loss: 0.7132 - val_accuracy: 0.7626
Epoch 3/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6817 - accuracy: 0.7726 - val_loss: 0.6385 - val_accuracy: 0.7894
Epoch 4/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6219 - accuracy: 0.7942 - val_loss: 0.5931 - val_accuracy: 0.8016
Epoch 5/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5830 - accuracy: 0.8074 - val_loss: 0.5607 - val_accuracy: 0.8170
Epoch 6/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5552 - accuracy: 0.8172 - val_loss: 0.5355 - val_accuracy: 0.8238
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5339 - accuracy: 0.8226 - val_loss: 0.5166 - val_accuracy: 0.8298
Epoch 8/10
55000/55000 [==============================] - 2s 43us/sample - loss: 0.5173 - accuracy: 0.8262 - val_loss: 0.5043 - val_accuracy: 0.8356
Epoch 9/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5039 - accuracy: 0.8306 - val_loss: 0.4889 - val_accuracy: 0.8384
Epoch 10/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.4923 - accuracy: 0.8333 - val_loss: 0.4816 - val_accuracy: 0.8394
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 47us/sample - loss: 1.3452 - accuracy: 0.6203 - val_loss: 0.9241 - val_accuracy: 0.7170
Epoch 2/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.8196 - accuracy: 0.7364 - val_loss: 0.7314 - val_accuracy: 0.7600
Epoch 3/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.6970 - accuracy: 0.7701 - val_loss: 0.6517 - val_accuracy: 0.7880
Epoch 4/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.6333 - accuracy: 0.7914 - val_loss: 0.6032 - val_accuracy: 0.8050
Epoch 5/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5916 - accuracy: 0.8049 - val_loss: 0.5689 - val_accuracy: 0.8162
Epoch 6/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5619 - accuracy: 0.8143 - val_loss: 0.5416 - val_accuracy: 0.8222
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5391 - accuracy: 0.8208 - val_loss: 0.5213 - val_accuracy: 0.8300
Epoch 8/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5214 - accuracy: 0.8258 - val_loss: 0.5075 - val_accuracy: 0.8348
Epoch 9/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5070 - accuracy: 0.8287 - val_loss: 0.4917 - val_accuracy: 0.8380
Epoch 10/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.4946 - accuracy: 0.8322 - val_loss: 0.4839 - val_accuracy: 0.8378
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 13s 238us/sample - loss: 1.1277 - accuracy: 0.5573 - val_loss: 0.8152 - val_accuracy: 0.6700
Epoch 2/5
55000/55000 [==============================] - 11s 198us/sample - loss: 0.6935 - accuracy: 0.7383 - val_loss: 0.5806 - val_accuracy: 0.7928
Epoch 3/5
55000/55000 [==============================] - 11s 196us/sample - loss: 0.5871 - accuracy: 0.7865 - val_loss: 0.6876 - val_accuracy: 0.7462
Epoch 4/5
55000/55000 [==============================] - 11s 199us/sample - loss: 0.5281 - accuracy: 0.8134 - val_loss: 0.5236 - val_accuracy: 0.8230
Epoch 5/5
55000/55000 [==============================] - 11s 201us/sample - loss: 0.4824 - accuracy: 0.8327 - val_loss: 0.5201 - val_accuracy: 0.8312
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 12s 213us/sample - loss: 1.7518 - accuracy: 0.2797 - val_loss: 1.2328 - val_accuracy: 0.4720
Epoch 2/5
55000/55000 [==============================] - 10s 177us/sample - loss: 1.1922 - accuracy: 0.4982 - val_loss: 1.0247 - val_accuracy: 0.5354
Epoch 3/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.9390 - accuracy: 0.6180 - val_loss: 1.0809 - val_accuracy: 0.5118
Epoch 4/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.7787 - accuracy: 0.6937 - val_loss: 0.7067 - val_accuracy: 0.7344
Epoch 5/5
55000/55000 [==============================] - 10s 180us/sample - loss: 0.7465 - accuracy: 0.7122 - val_loss: 0.9720 - val_accuracy: 0.5702
###Markdown
Not great at all, we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 63us/sample - loss: 0.8760 - accuracy: 0.7122 - val_loss: 0.5509 - val_accuracy: 0.8224
Epoch 2/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5737 - accuracy: 0.8039 - val_loss: 0.4723 - val_accuracy: 0.8460
Epoch 3/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5143 - accuracy: 0.8231 - val_loss: 0.4376 - val_accuracy: 0.8570
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4826 - accuracy: 0.8333 - val_loss: 0.4135 - val_accuracy: 0.8638
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4571 - accuracy: 0.8415 - val_loss: 0.3990 - val_accuracy: 0.8654
Epoch 6/10
55000/55000 [==============================] - 3s 53us/sample - loss: 0.4432 - accuracy: 0.8456 - val_loss: 0.3870 - val_accuracy: 0.8710
Epoch 7/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.4255 - accuracy: 0.8515 - val_loss: 0.3782 - val_accuracy: 0.8698
Epoch 8/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4150 - accuracy: 0.8536 - val_loss: 0.3708 - val_accuracy: 0.8758
Epoch 9/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4016 - accuracy: 0.8596 - val_loss: 0.3634 - val_accuracy: 0.8750
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3915 - accuracy: 0.8629 - val_loss: 0.3601 - val_accuracy: 0.8758
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has some as well; they would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 4s 64us/sample - loss: 0.8656 - accuracy: 0.7094 - val_loss: 0.5650 - val_accuracy: 0.8098
Epoch 2/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5766 - accuracy: 0.8018 - val_loss: 0.4834 - val_accuracy: 0.8358
Epoch 3/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5184 - accuracy: 0.8216 - val_loss: 0.4461 - val_accuracy: 0.8470
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4852 - accuracy: 0.8314 - val_loss: 0.4226 - val_accuracy: 0.8558
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4579 - accuracy: 0.8399 - val_loss: 0.4086 - val_accuracy: 0.8604
Epoch 6/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4406 - accuracy: 0.8457 - val_loss: 0.3974 - val_accuracy: 0.8640
Epoch 7/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4263 - accuracy: 0.8498 - val_loss: 0.3883 - val_accuracy: 0.8676
Epoch 8/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4152 - accuracy: 0.8530 - val_loss: 0.3803 - val_accuracy: 0.8682
Epoch 9/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4032 - accuracy: 0.8564 - val_loss: 0.3738 - val_accuracy: 0.8718
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3937 - accuracy: 0.8623 - val_loss: 0.3690 - val_accuracy: 0.8732
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
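###Markdown
To make the difference between the two options concrete, here is a tiny sketch (not a cell from the book): `clipnorm` rescales the whole gradient vector when its L2 norm exceeds the threshold, preserving its direction, whereas `clipvalue` clips each component independently.
###Code
w = tf.Variable([0.0, 0.0])
optimizer = keras.optimizers.SGD(learning_rate=1.0, clipnorm=1.0)
with tf.GradientTape() as tape:
    loss = 3.0 * w[0] + 4.0 * w[1]   # gradient w.r.t. w is [3.0, 4.0], with L2 norm 5.0
grads = tape.gradient(loss, [w])
optimizer.apply_gradients(zip(grads, [w]))
print(w.numpy())   # roughly [-0.6, -0.8]: the gradient was rescaled to unit norm
###Output
_____no_output_____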
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Train on 200 samples, validate on 986 samples
Epoch 1/4
200/200 [==============================] - 0s 2ms/sample - loss: 0.5619 - accuracy: 0.6650 - val_loss: 0.5669 - val_accuracy: 0.6531
Epoch 2/4
200/200 [==============================] - 0s 208us/sample - loss: 0.5249 - accuracy: 0.7200 - val_loss: 0.5337 - val_accuracy: 0.6957
Epoch 3/4
200/200 [==============================] - 0s 200us/sample - loss: 0.4923 - accuracy: 0.7400 - val_loss: 0.5039 - val_accuracy: 0.7211
Epoch 4/4
200/200 [==============================] - 0s 214us/sample - loss: 0.4630 - accuracy: 0.7550 - val_loss: 0.4773 - val_accuracy: 0.7383
Train on 200 samples, validate on 986 samples
Epoch 1/16
200/200 [==============================] - 0s 2ms/sample - loss: 0.3864 - accuracy: 0.8200 - val_loss: 0.3357 - val_accuracy: 0.8661
Epoch 2/16
200/200 [==============================] - 0s 207us/sample - loss: 0.2701 - accuracy: 0.9350 - val_loss: 0.2608 - val_accuracy: 0.9249
Epoch 3/16
200/200 [==============================] - 0s 226us/sample - loss: 0.2082 - accuracy: 0.9650 - val_loss: 0.2150 - val_accuracy: 0.9503
Epoch 4/16
200/200 [==============================] - 0s 212us/sample - loss: 0.1695 - accuracy: 0.9800 - val_loss: 0.1840 - val_accuracy: 0.9625
Epoch 5/16
200/200 [==============================] - 0s 226us/sample - loss: 0.1428 - accuracy: 0.9800 - val_loss: 0.1602 - val_accuracy: 0.9706
Epoch 6/16
200/200 [==============================] - 0s 236us/sample - loss: 0.1221 - accuracy: 0.9850 - val_loss: 0.1424 - val_accuracy: 0.9797
Epoch 7/16
200/200 [==============================] - 0s 218us/sample - loss: 0.1067 - accuracy: 0.9950 - val_loss: 0.1293 - val_accuracy: 0.9828
Epoch 8/16
200/200 [==============================] - 0s 229us/sample - loss: 0.0952 - accuracy: 0.9950 - val_loss: 0.1186 - val_accuracy: 0.9848
Epoch 9/16
200/200 [==============================] - 0s 224us/sample - loss: 0.0858 - accuracy: 0.9950 - val_loss: 0.1099 - val_accuracy: 0.9848
Epoch 10/16
200/200 [==============================] - 0s 241us/sample - loss: 0.0781 - accuracy: 1.0000 - val_loss: 0.1026 - val_accuracy: 0.9878
Epoch 11/16
200/200 [==============================] - 0s 234us/sample - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0964 - val_accuracy: 0.9888
Epoch 12/16
200/200 [==============================] - 0s 222us/sample - loss: 0.0664 - accuracy: 1.0000 - val_loss: 0.0906 - val_accuracy: 0.9888
Epoch 13/16
200/200 [==============================] - 0s 228us/sample - loss: 0.0614 - accuracy: 1.0000 - val_loss: 0.0862 - val_accuracy: 0.9899
Epoch 14/16
200/200 [==============================] - 0s 225us/sample - loss: 0.0575 - accuracy: 1.0000 - val_loss: 0.0818 - val_accuracy: 0.9899
Epoch 15/16
200/200 [==============================] - 0s 219us/sample - loss: 0.0537 - accuracy: 1.0000 - val_loss: 0.0782 - val_accuracy: 0.9899
Epoch 16/16
200/200 [==============================] - 0s 221us/sample - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.0752 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
2000/2000 [==============================] - 0s 25us/sample - loss: 0.0697 - accuracy: 0.9925
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4!
###Code
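# Rough error-rate ratio, assuming the test accuracies printed by the two
# evaluate() calls above: about 96.95% for model_B and 99.25% for model_B_on_A.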
(100 - 96.95) / (100 - 99.25)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
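# The two-argument schedule function is used exactly like the one-argument
# version above (a minimal sketch, reusing the same callback class):
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)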
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
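# ReduceLROnPlateau halves the learning rate (factor=0.5) whenever the monitored
# quantity (val_loss by default) has not improved for 5 consecutive epochs.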
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
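# schedules.ExponentialDecay(initial_learning_rate, decay_steps, decay_rate):
# the rate starts at 0.01 and decays smoothly so that after every s steps
# (i.e. every 20 epochs) it has been multiplied by 0.1.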
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4872 - accuracy: 0.8296 - val_loss: 0.4141 - val_accuracy: 0.8548
Epoch 2/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3829 - accuracy: 0.8643 - val_loss: 0.3773 - val_accuracy: 0.8704
Epoch 3/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3495 - accuracy: 0.8763 - val_loss: 0.3696 - val_accuracy: 0.8730
Epoch 4/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3274 - accuracy: 0.8831 - val_loss: 0.3545 - val_accuracy: 0.8760
Epoch 5/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3102 - accuracy: 0.8899 - val_loss: 0.3460 - val_accuracy: 0.8784
Epoch 6/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2971 - accuracy: 0.8945 - val_loss: 0.3415 - val_accuracy: 0.8796
Epoch 7/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2858 - accuracy: 0.8985 - val_loss: 0.3353 - val_accuracy: 0.8834
Epoch 8/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2767 - accuracy: 0.9018 - val_loss: 0.3321 - val_accuracy: 0.8854
Epoch 9/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2685 - accuracy: 0.9043 - val_loss: 0.3281 - val_accuracy: 0.8862
Epoch 10/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2612 - accuracy: 0.9075 - val_loss: 0.3304 - val_accuracy: 0.8832
Epoch 11/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2554 - accuracy: 0.9097 - val_loss: 0.3261 - val_accuracy: 0.8868
Epoch 12/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2502 - accuracy: 0.9115 - val_loss: 0.3246 - val_accuracy: 0.8876
Epoch 13/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2456 - accuracy: 0.9133 - val_loss: 0.3243 - val_accuracy: 0.8870
Epoch 14/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2416 - accuracy: 0.9141 - val_loss: 0.3238 - val_accuracy: 0.8862
Epoch 15/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2380 - accuracy: 0.9170 - val_loss: 0.3197 - val_accuracy: 0.8876
Epoch 16/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2346 - accuracy: 0.9169 - val_loss: 0.3207 - val_accuracy: 0.8866
Epoch 17/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2321 - accuracy: 0.9186 - val_loss: 0.3182 - val_accuracy: 0.8878
Epoch 18/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2291 - accuracy: 0.9191 - val_loss: 0.3206 - val_accuracy: 0.8884
Epoch 19/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2271 - accuracy: 0.9201 - val_loss: 0.3194 - val_accuracy: 0.8876
Epoch 20/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2252 - accuracy: 0.9215 - val_loss: 0.3178 - val_accuracy: 0.8880
Epoch 21/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2234 - accuracy: 0.9218 - val_loss: 0.3171 - val_accuracy: 0.8904
Epoch 22/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2218 - accuracy: 0.9230 - val_loss: 0.3171 - val_accuracy: 0.8884
Epoch 23/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2204 - accuracy: 0.9227 - val_loss: 0.3168 - val_accuracy: 0.8882
Epoch 24/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2191 - accuracy: 0.9240 - val_loss: 0.3173 - val_accuracy: 0.8900
Epoch 25/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2182 - accuracy: 0.9239 - val_loss: 0.3166 - val_accuracy: 0.8892
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
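# The schedule object is passed to an optimizer in place of a fixed learning
# rate, just like the ExponentialDecay schedule above (a minimal sketch):
optimizer = keras.optimizers.SGD(learning_rate)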
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
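    # LR range test (descriptive summary): grow the learning rate geometrically
    # from min_rate to max_rate, one multiplicative step per batch, recording the
    # loss after each batch, then restore the model's initial weights and
    # learning rate and return the recorded rates and losses.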
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
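    # 1cycle schedule (descriptive summary): linearly increase the learning rate
    # from start_rate to max_rate over the first half of training, decrease it
    # back to start_rate over the second half, then drop it linearly to last_rate
    # over the remaining iterations.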
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.6576 - accuracy: 0.7743 - val_loss: 0.4901 - val_accuracy: 0.8300
Epoch 2/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.4587 - accuracy: 0.8387 - val_loss: 0.4316 - val_accuracy: 0.8490
Epoch 3/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.4119 - accuracy: 0.8560 - val_loss: 0.4117 - val_accuracy: 0.8580
Epoch 4/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.3842 - accuracy: 0.8657 - val_loss: 0.3920 - val_accuracy: 0.8638
Epoch 5/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3636 - accuracy: 0.8708 - val_loss: 0.3739 - val_accuracy: 0.8710
Epoch 6/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3460 - accuracy: 0.8767 - val_loss: 0.3742 - val_accuracy: 0.8690
Epoch 7/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.3312 - accuracy: 0.8818 - val_loss: 0.3760 - val_accuracy: 0.8656
Epoch 8/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.3194 - accuracy: 0.8846 - val_loss: 0.3583 - val_accuracy: 0.8756
Epoch 9/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3056 - accuracy: 0.8902 - val_loss: 0.3474 - val_accuracy: 0.8820
Epoch 10/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2943 - accuracy: 0.8937 - val_loss: 0.3993 - val_accuracy: 0.8562
Epoch 11/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2845 - accuracy: 0.8957 - val_loss: 0.3446 - val_accuracy: 0.8820
Epoch 12/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2720 - accuracy: 0.9020 - val_loss: 0.3348 - val_accuracy: 0.8808
Epoch 13/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2536 - accuracy: 0.9094 - val_loss: 0.3386 - val_accuracy: 0.8822
Epoch 14/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2420 - accuracy: 0.9125 - val_loss: 0.3313 - val_accuracy: 0.8858
Epoch 15/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.2288 - accuracy: 0.9174 - val_loss: 0.3241 - val_accuracy: 0.8840
Epoch 16/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2169 - accuracy: 0.9222 - val_loss: 0.3342 - val_accuracy: 0.8846
Epoch 17/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2067 - accuracy: 0.9264 - val_loss: 0.3208 - val_accuracy: 0.8874
Epoch 18/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1977 - accuracy: 0.9301 - val_loss: 0.3186 - val_accuracy: 0.8888
Epoch 19/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1892 - accuracy: 0.9329 - val_loss: 0.3278 - val_accuracy: 0.8848
Epoch 20/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1818 - accuracy: 0.9375 - val_loss: 0.3195 - val_accuracy: 0.8894
Epoch 21/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1756 - accuracy: 0.9395 - val_loss: 0.3163 - val_accuracy: 0.8948
Epoch 22/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.1701 - accuracy: 0.9416 - val_loss: 0.3177 - val_accuracy: 0.8920
Epoch 23/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.1657 - accuracy: 0.9441 - val_loss: 0.3168 - val_accuracy: 0.8944
Epoch 24/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1629 - accuracy: 0.9454 - val_loss: 0.3167 - val_accuracy: 0.8946
Epoch 25/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.1611 - accuracy: 0.9465 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 7s 133us/sample - loss: 1.6006 - accuracy: 0.8129 - val_loss: 0.7374 - val_accuracy: 0.8236
Epoch 2/2
55000/55000 [==============================] - 7s 128us/sample - loss: 0.7179 - accuracy: 0.8265 - val_loss: 0.6905 - val_accuracy: 0.8356
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 139us/sample - loss: 0.5856 - accuracy: 0.7992 - val_loss: 0.3908 - val_accuracy: 0.8570
Epoch 2/2
55000/55000 [==============================] - 6s 117us/sample - loss: 0.4260 - accuracy: 0.8443 - val_loss: 0.3389 - val_accuracy: 0.8730
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
Train on 55000 samples
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4186 - accuracy: 0.8451
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
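# 100 stochastic forward passes with dropout kept active (training=True);
# averaging them gives better-calibrated class probabilities, and their standard
# deviation provides a rough uncertainty estimate for each prediction.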
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
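    # keep dropout active even at inference time so that predictions stay stochastic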
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 6s 114us/sample - loss: 0.4734 - accuracy: 0.8364 - val_loss: 0.3999 - val_accuracy: 0.8614
Epoch 2/2
55000/55000 [==============================] - 6s 100us/sample - loss: 0.3583 - accuracy: 0.8685 - val_loss: 0.3494 - val_accuracy: 0.8746
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
5000/5000 [==============================] - 0s 65us/sample - loss: 1.5099 - accuracy: 0.4736
###Markdown
The model with the lowest validation loss gets about 47% accuracy on the validation set. It took 39 epochs to reach the lowest validation loss, with roughly 10 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 21s 466us/sample - loss: 1.8365 - accuracy: 0.3390 - val_loss: 1.6330 - val_accuracy: 0.4174
Epoch 2/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.6623 - accuracy: 0.4063 - val_loss: 1.5967 - val_accuracy: 0.4204
Epoch 3/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.5946 - accuracy: 0.4314 - val_loss: 1.5225 - val_accuracy: 0.4602
Epoch 4/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5417 - accuracy: 0.4551 - val_loss: 1.4680 - val_accuracy: 0.4756
Epoch 5/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5013 - accuracy: 0.4678 - val_loss: 1.4378 - val_accuracy: 0.4862
Epoch 6/100
45000/45000 [==============================] - 16s 361us/sample - loss: 1.4637 - accuracy: 0.4797 - val_loss: 1.4221 - val_accuracy: 0.4982
Epoch 7/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.4361 - accuracy: 0.4921 - val_loss: 1.4133 - val_accuracy: 0.4968
Epoch 8/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.4078 - accuracy: 0.4998 - val_loss: 1.3916 - val_accuracy: 0.5040
Epoch 9/100
45000/45000 [==============================] - 14s 315us/sample - loss: 1.3811 - accuracy: 0.5104 - val_loss: 1.3695 - val_accuracy: 0.5116
Epoch 10/100
45000/45000 [==============================] - 14s 318us/sample - loss: 1.3571 - accuracy: 0.5205 - val_loss: 1.3701 - val_accuracy: 0.5112
Epoch 11/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.3367 - accuracy: 0.5246 - val_loss: 1.3549 - val_accuracy: 0.5196
Epoch 12/100
45000/45000 [==============================] - 14s 316us/sample - loss: 1.3158 - accuracy: 0.5322 - val_loss: 1.4038 - val_accuracy: 0.5048
Epoch 13/100
45000/45000 [==============================] - 15s 328us/sample - loss: 1.3028 - accuracy: 0.5392 - val_loss: 1.3453 - val_accuracy: 0.5242
Epoch 14/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2798 - accuracy: 0.5460 - val_loss: 1.3427 - val_accuracy: 0.5218
Epoch 15/100
45000/45000 [==============================] - 15s 327us/sample - loss: 1.2642 - accuracy: 0.5502 - val_loss: 1.3802 - val_accuracy: 0.5072
Epoch 16/100
45000/45000 [==============================] - 15s 336us/sample - loss: 1.2497 - accuracy: 0.5592 - val_loss: 1.3870 - val_accuracy: 0.5154
Epoch 17/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.2339 - accuracy: 0.5645 - val_loss: 1.3270 - val_accuracy: 0.5366
Epoch 18/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2223 - accuracy: 0.5688 - val_loss: 1.3054 - val_accuracy: 0.5506
Epoch 19/100
45000/45000 [==============================] - 15s 339us/sample - loss: 1.2015 - accuracy: 0.5750 - val_loss: 1.3134 - val_accuracy: 0.5462
Epoch 20/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.1884 - accuracy: 0.5796 - val_loss: 1.3459 - val_accuracy: 0.5252
Epoch 21/100
45000/45000 [==============================] - 17s 370us/sample - loss: 1.1767 - accuracy: 0.5876 - val_loss: 1.3404 - val_accuracy: 0.5392
Epoch 22/100
45000/45000 [==============================] - 16s 366us/sample - loss: 1.1679 - accuracy: 0.5872 - val_loss: 1.3600 - val_accuracy: 0.5332
Epoch 23/100
45000/45000 [==============================] - 15s 337us/sample - loss: 1.1513 - accuracy: 0.5954 - val_loss: 1.3148 - val_accuracy: 0.5498
Epoch 24/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.1345 - accuracy: 0.6033 - val_loss: 1.3290 - val_accuracy: 0.5368
Epoch 25/100
45000/45000 [==============================] - 16s 350us/sample - loss: 1.1252 - accuracy: 0.6025 - val_loss: 1.3350 - val_accuracy: 0.5434
Epoch 26/100
45000/45000 [==============================] - 15s 341us/sample - loss: 1.1192 - accuracy: 0.6070 - val_loss: 1.3423 - val_accuracy: 0.5364
Epoch 27/100
45000/45000 [==============================] - 15s 342us/sample - loss: 1.1028 - accuracy: 0.6093 - val_loss: 1.3511 - val_accuracy: 0.5358
Epoch 28/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.0907 - accuracy: 0.6158 - val_loss: 1.3706 - val_accuracy: 0.5350
Epoch 29/100
45000/45000 [==============================] - 16s 345us/sample - loss: 1.0785 - accuracy: 0.6197 - val_loss: 1.3356 - val_accuracy: 0.5398
Epoch 30/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.0718 - accuracy: 0.6198 - val_loss: 1.3529 - val_accuracy: 0.5446
Epoch 31/100
45000/45000 [==============================] - 15s 333us/sample - loss: 1.0629 - accuracy: 0.6259 - val_loss: 1.3590 - val_accuracy: 0.5434
Epoch 32/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.0504 - accuracy: 0.6292 - val_loss: 1.3448 - val_accuracy: 0.5388
Epoch 33/100
45000/45000 [==============================] - 15s 325us/sample - loss: 1.0420 - accuracy: 0.6318 - val_loss: 1.3790 - val_accuracy: 0.5350
Epoch 34/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.0304 - accuracy: 0.6362 - val_loss: 1.3621 - val_accuracy: 0.5430
Epoch 35/100
45000/45000 [==============================] - 16s 356us/sample - loss: 1.0280 - accuracy: 0.6362 - val_loss: 1.3673 - val_accuracy: 0.5366
Epoch 36/100
45000/45000 [==============================] - 16s 354us/sample - loss: 1.0100 - accuracy: 0.6439 - val_loss: 1.3659 - val_accuracy: 0.5420
Epoch 37/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.0060 - accuracy: 0.6473 - val_loss: 1.3773 - val_accuracy: 0.5398
Epoch 38/100
45000/45000 [==============================] - 15s 332us/sample - loss: 0.9966 - accuracy: 0.6496 - val_loss: 1.3946 - val_accuracy: 0.5340
5000/5000 [==============================] - 1s 157us/sample - loss: 1.3054 - accuracy: 0.5506
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 39 epochs to reach the lowest validation loss, while the new model with BN took 18 epochs. That's more than twice as fast as the previous model. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 55% accuracy instead of 47%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged twice as fast, each epoch took about 16s instead of 10s, because of the extra computations required by the BN layers. So overall, even though the number of epochs was cut roughly in half, the training time (wall time) was only shortened by about 30%, which is still a significant improvement. d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
5000/5000 [==============================] - 0s 74us/sample - loss: 1.4626 - accuracy: 0.5140
###Markdown
We get 51.4% accuracy, which is better than the original model, but not quite as good as the model using batch normalization. Moreover, it took 13 epochs to reach the best model, which is much faster than both the original model and the BN model, plus each epoch took only 10 seconds, just like the original model. So it's by far the fastest model to train (both in terms of epochs and wall time). e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 12s 263us/sample - loss: 1.8763 - accuracy: 0.3330 - val_loss: 1.7595 - val_accuracy: 0.3668
Epoch 2/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.6527 - accuracy: 0.4148 - val_loss: 1.7666 - val_accuracy: 0.3808
Epoch 3/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.5682 - accuracy: 0.4439 - val_loss: 1.6393 - val_accuracy: 0.4490
Epoch 4/100
45000/45000 [==============================] - 10s 211us/sample - loss: 1.5030 - accuracy: 0.4698 - val_loss: 1.6028 - val_accuracy: 0.4466
Epoch 5/100
45000/45000 [==============================] - 9s 209us/sample - loss: 1.4430 - accuracy: 0.4913 - val_loss: 1.5394 - val_accuracy: 0.4562
Epoch 6/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.4005 - accuracy: 0.5084 - val_loss: 1.5408 - val_accuracy: 0.4818
Epoch 7/100
45000/45000 [==============================] - 10s 216us/sample - loss: 1.3541 - accuracy: 0.5298 - val_loss: 1.5236 - val_accuracy: 0.4866
Epoch 8/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.3189 - accuracy: 0.5405 - val_loss: 1.5174 - val_accuracy: 0.4926
Epoch 9/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.2800 - accuracy: 0.5570 - val_loss: 1.5722 - val_accuracy: 0.4998
Epoch 10/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.2512 - accuracy: 0.5656 - val_loss: 1.4974 - val_accuracy: 0.5082
Epoch 11/100
45000/45000 [==============================] - 9s 203us/sample - loss: 1.2141 - accuracy: 0.5802 - val_loss: 1.6123 - val_accuracy: 0.4916
Epoch 12/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.1856 - accuracy: 0.5893 - val_loss: 1.5449 - val_accuracy: 0.5016
Epoch 13/100
45000/45000 [==============================] - 9s 204us/sample - loss: 1.1602 - accuracy: 0.5978 - val_loss: 1.6241 - val_accuracy: 0.5056
Epoch 14/100
45000/45000 [==============================] - 9s 199us/sample - loss: 1.1290 - accuracy: 0.6118 - val_loss: 1.6085 - val_accuracy: 0.4936
Epoch 15/100
45000/45000 [==============================] - 9s 198us/sample - loss: 1.1050 - accuracy: 0.6176 - val_loss: 1.6951 - val_accuracy: 0.4860
Epoch 16/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.0786 - accuracy: 0.6293 - val_loss: 1.5806 - val_accuracy: 0.5044
Epoch 17/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.0629 - accuracy: 0.6362 - val_loss: 1.5932 - val_accuracy: 0.4970
Epoch 18/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.0330 - accuracy: 0.6458 - val_loss: 1.5968 - val_accuracy: 0.5080
Epoch 19/100
45000/45000 [==============================] - 9s 195us/sample - loss: 1.0104 - accuracy: 0.6488 - val_loss: 1.6166 - val_accuracy: 0.5152
Epoch 20/100
45000/45000 [==============================] - 9s 206us/sample - loss: 0.9896 - accuracy: 0.6629 - val_loss: 1.6174 - val_accuracy: 0.5154
Epoch 21/100
45000/45000 [==============================] - 9s 211us/sample - loss: 0.9741 - accuracy: 0.6650 - val_loss: 1.7201 - val_accuracy: 0.5040
Epoch 22/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9475 - accuracy: 0.6769 - val_loss: 1.7498 - val_accuracy: 0.5176
Epoch 23/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.9346 - accuracy: 0.6780 - val_loss: 1.7491 - val_accuracy: 0.5020
Epoch 24/100
45000/45000 [==============================] - 10s 223us/sample - loss: 1.1878 - accuracy: 0.6792 - val_loss: 1.6664 - val_accuracy: 0.4906
Epoch 25/100
45000/45000 [==============================] - 10s 219us/sample - loss: 0.9851 - accuracy: 0.6646 - val_loss: 1.7358 - val_accuracy: 0.5086
Epoch 26/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9053 - accuracy: 0.6911 - val_loss: 1.8361 - val_accuracy: 0.5094
Epoch 27/100
45000/45000 [==============================] - 10s 215us/sample - loss: 0.8681 - accuracy: 0.7048 - val_loss: 1.8487 - val_accuracy: 0.5036
Epoch 28/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.8460 - accuracy: 0.7132 - val_loss: 1.8516 - val_accuracy: 0.5068
Epoch 29/100
45000/45000 [==============================] - 10s 223us/sample - loss: 0.8258 - accuracy: 0.7208 - val_loss: 1.9383 - val_accuracy: 0.5094
Epoch 30/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.8106 - accuracy: 0.7248 - val_loss: 2.0527 - val_accuracy: 0.4974
5000/5000 [==============================] - 0s 71us/sample - loss: 1.4974 - accuracy: 0.5082
###Markdown
The model reaches 50.8% accuracy on the validation set. That's very slightly worse than without dropout (51.4%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple of utility functions. The first will run the model many times (10 by default) and return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get virtually no accuracy improvement in this case (from 50.8% to 50.9%). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(len(X_train_scaled) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/15
45000/45000 [==============================] - 3s 69us/sample - loss: 2.0504 - accuracy: 0.2823 - val_loss: 1.7711 - val_accuracy: 0.3706
Epoch 2/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.7626 - accuracy: 0.3766 - val_loss: 1.7751 - val_accuracy: 0.3844
Epoch 3/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.6264 - accuracy: 0.4272 - val_loss: 1.6774 - val_accuracy: 0.4216
Epoch 4/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.5527 - accuracy: 0.4474 - val_loss: 1.6633 - val_accuracy: 0.4316
Epoch 5/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.4997 - accuracy: 0.4701 - val_loss: 1.5909 - val_accuracy: 0.4540
Epoch 6/15
45000/45000 [==============================] - 3s 60us/sample - loss: 1.4564 - accuracy: 0.4841 - val_loss: 1.5982 - val_accuracy: 0.4624
Epoch 7/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.4232 - accuracy: 0.4958 - val_loss: 1.6417 - val_accuracy: 0.4382
Epoch 8/15
45000/45000 [==============================] - 3s 58us/sample - loss: 1.3530 - accuracy: 0.5199 - val_loss: 1.5050 - val_accuracy: 0.4778
Epoch 9/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.2771 - accuracy: 0.5480 - val_loss: 1.5254 - val_accuracy: 0.4928
Epoch 10/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.2073 - accuracy: 0.5726 - val_loss: 1.5013 - val_accuracy: 0.5052
Epoch 11/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.1380 - accuracy: 0.5948 - val_loss: 1.4941 - val_accuracy: 0.5170
Epoch 12/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.0672 - accuracy: 0.6204 - val_loss: 1.5091 - val_accuracy: 0.5106
Epoch 13/15
45000/45000 [==============================] - 3s 56us/sample - loss: 0.9967 - accuracy: 0.6466 - val_loss: 1.5261 - val_accuracy: 0.5212
Epoch 14/15
45000/45000 [==============================] - 3s 58us/sample - loss: 0.9301 - accuracy: 0.6712 - val_loss: 1.5437 - val_accuracy: 0.5264
Epoch 15/15
45000/45000 [==============================] - 3s 59us/sample - loss: 0.8893 - accuracy: 0.6866 - val_loss: 1.5650 - val_accuracy: 0.5276
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0-preview.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# TensorFlow ≥2.0-preview is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
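###Markdown
Why does the initialization variance matter so much? Here is a minimal NumPy sketch (illustration only): propagating a standardized input through 50 ReLU layers, weights drawn with variance 2 / fan_in (He initialization) keep the activation scale on the order of 1, whereas unit-variance weights make it explode.
###Code
# Minimal NumPy sketch (illustration only): activation scale after 50 ReLU layers
# with He-initialized weights vs. naive unit-variance weights.
import numpy as np

rng = np.random.RandomState(42)
fan_in = 100
Z_he = rng.normal(size=(500, fan_in))        # standardized inputs
Z_naive = Z_he.copy()
for _ in range(50):
    W_he = rng.normal(scale=np.sqrt(2 / fan_in), size=(fan_in, fan_in))  # He initialization
    W_naive = rng.normal(scale=1.0, size=(fan_in, fan_in))               # variance too large
    Z_he = np.maximum(0, Z_he @ W_he)            # ReLU layer
    Z_naive = np.maximum(0, Z_naive @ W_naive)
print("std with He init:   ", Z_he.std())        # stays on the order of 1
print("std with naive init:", Z_naive.std())     # explodes
###Output
_____no_output_____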
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
leaky_relu = keras.layers.LeakyReLU(alpha=0.2)
layer = keras.layers.Dense(10, activation=leaky_relu)
layer.activation
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation=leaky_relu),
keras.layers.Dense(100, activation=leaky_relu),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 64us/sample - loss: 1.3979 - accuracy: 0.5948 - val_loss: 0.9369 - val_accuracy: 0.7162
Epoch 2/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.8333 - accuracy: 0.7341 - val_loss: 0.7392 - val_accuracy: 0.7638
Epoch 3/10
55000/55000 [==============================] - 3s 58us/sample - loss: 0.7068 - accuracy: 0.7711 - val_loss: 0.6561 - val_accuracy: 0.7906
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.6417 - accuracy: 0.7889 - val_loss: 0.6052 - val_accuracy: 0.8088
Epoch 5/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.5988 - accuracy: 0.8019 - val_loss: 0.5716 - val_accuracy: 0.8166
Epoch 6/10
55000/55000 [==============================] - 3s 58us/sample - loss: 0.5686 - accuracy: 0.8118 - val_loss: 0.5465 - val_accuracy: 0.8234
Epoch 7/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.5460 - accuracy: 0.8181 - val_loss: 0.5273 - val_accuracy: 0.8314
Epoch 8/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.5281 - accuracy: 0.8229 - val_loss: 0.5108 - val_accuracy: 0.8370
Epoch 9/10
55000/55000 [==============================] - 3s 60us/sample - loss: 0.5137 - accuracy: 0.8261 - val_loss: 0.4985 - val_accuracy: 0.8398
Epoch 10/10
55000/55000 [==============================] - 3s 53us/sample - loss: 0.5018 - accuracy: 0.8289 - val_loss: 0.4901 - val_accuracy: 0.8382
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 35s 644us/sample - loss: 1.0197 - accuracy: 0.6154 - val_loss: 0.7386 - val_accuracy: 0.7348
Epoch 2/5
55000/55000 [==============================] - 33s 607us/sample - loss: 0.7149 - accuracy: 0.7401 - val_loss: 0.6187 - val_accuracy: 0.7774
Epoch 3/5
55000/55000 [==============================] - 32s 583us/sample - loss: 0.6193 - accuracy: 0.7803 - val_loss: 0.5926 - val_accuracy: 0.8036
Epoch 4/5
55000/55000 [==============================] - 32s 586us/sample - loss: 0.5555 - accuracy: 0.8043 - val_loss: 0.5208 - val_accuracy: 0.8262
Epoch 5/5
55000/55000 [==============================] - 32s 573us/sample - loss: 0.5159 - accuracy: 0.8238 - val_loss: 0.4790 - val_accuracy: 0.8358
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 18s 319us/sample - loss: 1.9174 - accuracy: 0.2242 - val_loss: 1.3856 - val_accuracy: 0.3846
Epoch 2/5
55000/55000 [==============================] - 15s 279us/sample - loss: 1.2147 - accuracy: 0.4750 - val_loss: 1.0691 - val_accuracy: 0.5510
Epoch 3/5
55000/55000 [==============================] - 15s 281us/sample - loss: 0.9576 - accuracy: 0.6025 - val_loss: 0.7688 - val_accuracy: 0.7036
Epoch 4/5
55000/55000 [==============================] - 15s 281us/sample - loss: 0.8116 - accuracy: 0.6762 - val_loss: 0.7276 - val_accuracy: 0.7288
Epoch 5/5
55000/55000 [==============================] - 15s 278us/sample - loss: 0.8167 - accuracy: 0.6862 - val_loss: 0.7697 - val_accuracy: 0.7032
###Markdown
Not great at all; we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 5s 85us/sample - loss: 0.8756 - accuracy: 0.7140 - val_loss: 0.5514 - val_accuracy: 0.8212
Epoch 2/10
55000/55000 [==============================] - 4s 74us/sample - loss: 0.5765 - accuracy: 0.8033 - val_loss: 0.4742 - val_accuracy: 0.8436
Epoch 3/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.5146 - accuracy: 0.8216 - val_loss: 0.4382 - val_accuracy: 0.8530
Epoch 4/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4821 - accuracy: 0.8322 - val_loss: 0.4170 - val_accuracy: 0.8604
Epoch 5/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4589 - accuracy: 0.8402 - val_loss: 0.4003 - val_accuracy: 0.8658
Epoch 6/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4428 - accuracy: 0.8459 - val_loss: 0.3883 - val_accuracy: 0.8698
Epoch 7/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4220 - accuracy: 0.8521 - val_loss: 0.3792 - val_accuracy: 0.8720
Epoch 8/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4150 - accuracy: 0.8546 - val_loss: 0.3696 - val_accuracy: 0.8754
Epoch 9/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4013 - accuracy: 0.8589 - val_loss: 0.3629 - val_accuracy: 0.8746
Epoch 10/10
55000/55000 [==============================] - 4s 74us/sample - loss: 0.3931 - accuracy: 0.8615 - val_loss: 0.3581 - val_accuracy: 0.8766
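###Markdown
As a reminder of what each `BatchNormalization` layer computes at training time, here is a minimal NumPy sketch (illustration only, not the exact Keras code): standardize each input feature using the batch mean and variance, then rescale and shift with the learned parameters $\gamma$ and $\beta$. At test time, Keras uses moving averages of the mean and variance instead of the batch statistics.
###Code
# Minimal NumPy sketch (illustration only) of the Batch Normalization transform
# for a batch X at training time.
import numpy as np

def batch_norm(X, gamma, beta, eps=0.001):
    mean = X.mean(axis=0)                       # per-feature batch mean
    var = X.var(axis=0)                         # per-feature batch variance
    X_hat = (X - mean) / np.sqrt(var + eps)     # zero mean, unit variance
    return gamma * X_hat + beta                 # learned rescaling and shifting

X = np.random.rand(32, 4) * 10                  # toy batch: 32 samples, 4 features
out = batch_norm(X, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))
###Output
_____no_output_____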
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer adds its own trainable offset; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
    keras.layers.BatchNormalization(),
    keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 5s 89us/sample - loss: 0.8617 - accuracy: 0.7095 - val_loss: 0.5649 - val_accuracy: 0.8102
Epoch 2/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.5803 - accuracy: 0.8015 - val_loss: 0.4833 - val_accuracy: 0.8344
Epoch 3/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.5153 - accuracy: 0.8208 - val_loss: 0.4463 - val_accuracy: 0.8462
Epoch 4/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.4846 - accuracy: 0.8307 - val_loss: 0.4256 - val_accuracy: 0.8530
Epoch 5/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.4576 - accuracy: 0.8402 - val_loss: 0.4106 - val_accuracy: 0.8590
Epoch 6/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4401 - accuracy: 0.8467 - val_loss: 0.3973 - val_accuracy: 0.8610
Epoch 7/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4296 - accuracy: 0.8482 - val_loss: 0.3899 - val_accuracy: 0.8650
Epoch 8/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.4127 - accuracy: 0.8559 - val_loss: 0.3818 - val_accuracy: 0.8658
Epoch 9/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4007 - accuracy: 0.8588 - val_loss: 0.3741 - val_accuracy: 0.8682
Epoch 10/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.3929 - accuracy: 0.8621 - val_loss: 0.3694 - val_accuracy: 0.8734
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
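###Markdown
The two options behave differently: `clipvalue` clips each gradient component to $[-1.0, 1.0]$ independently, which can change the direction of the gradient vector, while `clipnorm` rescales the whole vector whenever its $\ell_2$ norm exceeds the threshold, which preserves its direction. A minimal NumPy sketch (illustration only):
###Code
# Minimal NumPy sketch (illustration only): clipping by value vs. clipping by norm.
import numpy as np

grad = np.array([0.9, 100.0])

clipped_by_value = np.clip(grad, -1.0, 1.0)       # [0.9, 1.0]   -> direction changed
norm = np.linalg.norm(grad)
clipped_by_norm = grad * min(1.0, 1.0 / norm)     # ~[0.009, 1.0] -> direction preserved
print(clipped_by_value, clipped_by_norm)
###Output
_____no_output_____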
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
# note: model_B_on_A shares its layers with model_A, so training it would also modify
# model_A; clone model_A (and copy its weights) first if you need an untouched copy
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
# freeze the reused layers for the first few epochs (the model must be compiled again
# after changing `trainable` for the change to take effect)
for layer in model_B_on_A.layers[:-1]:
    layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
# unfreeze the reused layers (and recompile) to fine-tune the whole model
for layer in model_B_on_A.layers[:-1]:
    layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Train on 200 samples, validate on 986 samples
Epoch 1/4
200/200 [==============================] - 0s 2ms/sample - loss: 0.5851 - accuracy: 0.6600 - val_loss: 0.5855 - val_accuracy: 0.6318
Epoch 2/4
200/200 [==============================] - 0s 303us/sample - loss: 0.5484 - accuracy: 0.6850 - val_loss: 0.5484 - val_accuracy: 0.6775
Epoch 3/4
200/200 [==============================] - 0s 294us/sample - loss: 0.5116 - accuracy: 0.7250 - val_loss: 0.5141 - val_accuracy: 0.7160
Epoch 4/4
200/200 [==============================] - 0s 316us/sample - loss: 0.4779 - accuracy: 0.7450 - val_loss: 0.4859 - val_accuracy: 0.7363
Train on 200 samples, validate on 986 samples
Epoch 1/16
200/200 [==============================] - 0s 2ms/sample - loss: 0.3989 - accuracy: 0.8050 - val_loss: 0.3419 - val_accuracy: 0.8702
Epoch 2/16
200/200 [==============================] - 0s 328us/sample - loss: 0.2795 - accuracy: 0.9300 - val_loss: 0.2624 - val_accuracy: 0.9280
Epoch 3/16
200/200 [==============================] - 0s 319us/sample - loss: 0.2128 - accuracy: 0.9650 - val_loss: 0.2150 - val_accuracy: 0.9544
Epoch 4/16
200/200 [==============================] - 0s 318us/sample - loss: 0.1720 - accuracy: 0.9800 - val_loss: 0.1826 - val_accuracy: 0.9635
Epoch 5/16
200/200 [==============================] - 0s 317us/sample - loss: 0.1436 - accuracy: 0.9800 - val_loss: 0.1586 - val_accuracy: 0.9736
Epoch 6/16
200/200 [==============================] - 0s 317us/sample - loss: 0.1231 - accuracy: 0.9850 - val_loss: 0.1407 - val_accuracy: 0.9807
Epoch 7/16
200/200 [==============================] - 0s 325us/sample - loss: 0.1074 - accuracy: 0.9900 - val_loss: 0.1270 - val_accuracy: 0.9828
Epoch 8/16
200/200 [==============================] - 0s 326us/sample - loss: 0.0953 - accuracy: 0.9950 - val_loss: 0.1158 - val_accuracy: 0.9848
Epoch 9/16
200/200 [==============================] - 0s 319us/sample - loss: 0.0854 - accuracy: 1.0000 - val_loss: 0.1076 - val_accuracy: 0.9878
Epoch 10/16
200/200 [==============================] - 0s 322us/sample - loss: 0.0781 - accuracy: 1.0000 - val_loss: 0.1007 - val_accuracy: 0.9888
Epoch 11/16
200/200 [==============================] - 0s 316us/sample - loss: 0.0718 - accuracy: 1.0000 - val_loss: 0.0944 - val_accuracy: 0.9888
Epoch 12/16
200/200 [==============================] - 0s 319us/sample - loss: 0.0662 - accuracy: 1.0000 - val_loss: 0.0891 - val_accuracy: 0.9899
Epoch 13/16
200/200 [==============================] - 0s 318us/sample - loss: 0.0613 - accuracy: 1.0000 - val_loss: 0.0846 - val_accuracy: 0.9899
Epoch 14/16
200/200 [==============================] - 0s 332us/sample - loss: 0.0574 - accuracy: 1.0000 - val_loss: 0.0806 - val_accuracy: 0.9899
Epoch 15/16
200/200 [==============================] - 0s 320us/sample - loss: 0.0538 - accuracy: 1.0000 - val_loss: 0.0770 - val_accuracy: 0.9899
Epoch 16/16
200/200 [==============================] - 0s 320us/sample - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.0740 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
2000/2000 [==============================] - 0s 38us/sample - loss: 0.0689 - accuracy: 0.9925
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of almost 4!
###Code
(100 - 97.05) / (100 - 99.25)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
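###Markdown
As a reminder of what the `momentum` argument does, here is a minimal NumPy sketch of the classic momentum update rule (illustration only, not the exact Keras code): the optimizer keeps a velocity vector that accumulates an exponentially decaying sum of past gradients, so steps keep speeding up along directions where the gradient is consistent.
###Code
# Minimal NumPy sketch (illustration only) of gradient descent with momentum,
# minimizing f(w) = w**2 starting from w = 10.
w, velocity = 10.0, 0.0
lr, beta = 0.1, 0.9
for step in range(50):
    grad = 2 * w                          # gradient of w**2
    velocity = beta * velocity - lr * grad
    w = w + velocity                      # momentum update
print(w)                                  # w heads toward the minimum at 0, oscillating on the way
###Output
_____no_output_____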
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
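###Markdown
Under the hood, Adam keeps two exponentially decaying averages per parameter, one of the gradients (`beta_1`) and one of the squared gradients (`beta_2`), corrects their initialization bias, and scales each step by their ratio. A minimal NumPy sketch (illustration only, not the exact Keras code):
###Code
# Minimal NumPy sketch (illustration only) of the Adam update rule.
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-7):
    m = beta1 * m + (1 - beta1) * grad        # running average of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2   # running average of squared gradients
    m_hat = m / (1 - beta1 ** t)              # bias correction (t starts at 1)
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w, m, v = np.array([1.0, -2.0]), np.zeros(2), np.zeros(2)
for t in range(1, 101):
    w, m, v = adam_step(w, 2 * w, m, v, t)    # gradient of sum(w**2)
print(w)
###Output
_____no_output_____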
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4887 - accuracy: 0.8282 - val_loss: 0.4245 - val_accuracy: 0.8526
Epoch 2/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.3830 - accuracy: 0.8641 - val_loss: 0.3798 - val_accuracy: 0.8688
Epoch 3/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.3491 - accuracy: 0.8758 - val_loss: 0.3650 - val_accuracy: 0.8730
Epoch 4/25
55000/55000 [==============================] - 4s 78us/sample - loss: 0.3267 - accuracy: 0.8839 - val_loss: 0.3564 - val_accuracy: 0.8746
Epoch 5/25
55000/55000 [==============================] - 4s 72us/sample - loss: 0.3102 - accuracy: 0.8893 - val_loss: 0.3493 - val_accuracy: 0.8770
Epoch 6/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2969 - accuracy: 0.8939 - val_loss: 0.3400 - val_accuracy: 0.8818
Epoch 7/25
55000/55000 [==============================] - 4s 77us/sample - loss: 0.2855 - accuracy: 0.8983 - val_loss: 0.3385 - val_accuracy: 0.8830
Epoch 8/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2764 - accuracy: 0.9025 - val_loss: 0.3372 - val_accuracy: 0.8824
Epoch 9/25
55000/55000 [==============================] - 4s 67us/sample - loss: 0.2684 - accuracy: 0.9039 - val_loss: 0.3337 - val_accuracy: 0.8848
Epoch 10/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2613 - accuracy: 0.9072 - val_loss: 0.3277 - val_accuracy: 0.8862
Epoch 11/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2555 - accuracy: 0.9086 - val_loss: 0.3273 - val_accuracy: 0.8860
Epoch 12/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2500 - accuracy: 0.9111 - val_loss: 0.3244 - val_accuracy: 0.8840
Epoch 13/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2454 - accuracy: 0.9124 - val_loss: 0.3194 - val_accuracy: 0.8904
Epoch 14/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2414 - accuracy: 0.9141 - val_loss: 0.3226 - val_accuracy: 0.8884
Epoch 15/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2378 - accuracy: 0.9160 - val_loss: 0.3233 - val_accuracy: 0.8860
Epoch 16/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2347 - accuracy: 0.9174 - val_loss: 0.3207 - val_accuracy: 0.8904
Epoch 17/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2318 - accuracy: 0.9179 - val_loss: 0.3195 - val_accuracy: 0.8892
Epoch 18/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2293 - accuracy: 0.9193 - val_loss: 0.3184 - val_accuracy: 0.8916
Epoch 19/25
55000/55000 [==============================] - 4s 67us/sample - loss: 0.2272 - accuracy: 0.9201 - val_loss: 0.3196 - val_accuracy: 0.8886
Epoch 20/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2253 - accuracy: 0.9206 - val_loss: 0.3190 - val_accuracy: 0.8918
Epoch 21/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2235 - accuracy: 0.9214 - val_loss: 0.3176 - val_accuracy: 0.8912
Epoch 22/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2220 - accuracy: 0.9220 - val_loss: 0.3181 - val_accuracy: 0.8900
Epoch 23/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2206 - accuracy: 0.9226 - val_loss: 0.3187 - val_accuracy: 0.8894
Epoch 24/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2193 - accuracy: 0.9231 - val_loss: 0.3168 - val_accuracy: 0.8908
Epoch 25/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2181 - accuracy: 0.9234 - val_loss: 0.3171 - val_accuracy: 0.8898
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (iter2 - self.iteration)
/ (iter2 - iter1) + rate1)
    def on_batch_begin(self, batch, logs):
        # 1cycle: ramp the rate linearly up to max_rate during the first half of training,
        # then back down to start_rate, and finally anneal it down to last_rate at the end
        if self.iteration < self.half_iteration:
            rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 2s 30us/sample - loss: 0.4926 - accuracy: 0.8268 - val_loss: 0.4229 - val_accuracy: 0.8520
Epoch 2/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.3754 - accuracy: 0.8669 - val_loss: 0.3833 - val_accuracy: 0.8634
Epoch 3/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.3433 - accuracy: 0.8776 - val_loss: 0.3687 - val_accuracy: 0.8666
Epoch 4/25
55000/55000 [==============================] - 2s 28us/sample - loss: 0.3198 - accuracy: 0.8854 - val_loss: 0.3595 - val_accuracy: 0.8738
Epoch 5/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.3011 - accuracy: 0.8920 - val_loss: 0.3421 - val_accuracy: 0.8764
Epoch 6/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.2873 - accuracy: 0.8973 - val_loss: 0.3371 - val_accuracy: 0.8814
Epoch 7/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2738 - accuracy: 0.9026 - val_loss: 0.3312 - val_accuracy: 0.8842
Epoch 8/25
55000/55000 [==============================] - 2s 28us/sample - loss: 0.2633 - accuracy: 0.9071 - val_loss: 0.3338 - val_accuracy: 0.8824
Epoch 9/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.2543 - accuracy: 0.9098 - val_loss: 0.3296 - val_accuracy: 0.8840
Epoch 10/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2465 - accuracy: 0.9125 - val_loss: 0.3233 - val_accuracy: 0.8874
Epoch 11/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.2406 - accuracy: 0.9157 - val_loss: 0.3215 - val_accuracy: 0.8874
Epoch 12/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2366 - accuracy: 0.9173 - val_loss: 0.3237 - val_accuracy: 0.8862
Epoch 13/25
55000/55000 [==============================] - 2s 27us/sample - loss: 0.2370 - accuracy: 0.9160 - val_loss: 0.3282 - val_accuracy: 0.8856
Epoch 14/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2366 - accuracy: 0.9157 - val_loss: 0.3228 - val_accuracy: 0.8874
Epoch 15/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2362 - accuracy: 0.9162 - val_loss: 0.3261 - val_accuracy: 0.8860
Epoch 16/25
55000/55000 [==============================] - 2s 28us/sample - loss: 0.2339 - accuracy: 0.9167 - val_loss: 0.3336 - val_accuracy: 0.8830
Epoch 17/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2319 - accuracy: 0.9166 - val_loss: 0.3316 - val_accuracy: 0.8818
Epoch 18/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.2295 - accuracy: 0.9181 - val_loss: 0.3424 - val_accuracy: 0.8786
Epoch 19/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2266 - accuracy: 0.9186 - val_loss: 0.3356 - val_accuracy: 0.8844
Epoch 20/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2250 - accuracy: 0.9186 - val_loss: 0.3486 - val_accuracy: 0.8758
Epoch 21/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2221 - accuracy: 0.9189 - val_loss: 0.3443 - val_accuracy: 0.8856
Epoch 22/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2184 - accuracy: 0.9201 - val_loss: 0.3889 - val_accuracy: 0.8700
Epoch 23/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2040 - accuracy: 0.9266 - val_loss: 0.3216 - val_accuracy: 0.8910
Epoch 24/25
55000/55000 [==============================] - 2s 28us/sample - loss: 0.1750 - accuracy: 0.9401 - val_loss: 0.3153 - val_accuracy: 0.8932
Epoch 25/25
55000/55000 [==============================] - 2s 28us/sample - loss: 0.1718 - accuracy: 0.9416 - val_loss: 0.3153 - val_accuracy: 0.8940
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 7s 129us/sample - loss: 1.6597 - accuracy: 0.8128 - val_loss: 0.7630 - val_accuracy: 0.8080
Epoch 2/2
55000/55000 [==============================] - 7s 124us/sample - loss: 0.7176 - accuracy: 0.8271 - val_loss: 0.6848 - val_accuracy: 0.8360
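###Markdown
As a reminder, `keras.regularizers.l2(0.01)` simply adds $0.01 \times \sum w^2$ over the layer's kernel to the training loss (one such term per regularized layer). A minimal NumPy sketch (illustration only) of the penalty for a single 784×300 kernel:
###Code
# Minimal NumPy sketch (illustration only) of the l2(0.01) penalty term added to
# the loss for one layer's kernel.
import numpy as np

W = np.random.randn(784, 300) * 0.05    # a hypothetical weight matrix
l2_penalty = 0.01 * np.sum(W ** 2)      # what kernel_regularizer=l2(0.01) contributes
print(l2_penalty)
###Output
_____no_output_____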
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 145us/sample - loss: 0.5741 - accuracy: 0.8030 - val_loss: 0.3841 - val_accuracy: 0.8572
Epoch 2/2
55000/55000 [==============================] - 7s 134us/sample - loss: 0.4218 - accuracy: 0.8469 - val_loss: 0.3534 - val_accuracy: 0.8728
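###Markdown
As a reminder of what each `Dropout` layer does at training time, here is a minimal NumPy sketch (illustration only): randomly zero out a fraction `rate` of the inputs and scale the survivors by $1/(1-\text{rate})$ so the expected activation level is unchanged; at test time the layer simply passes its inputs through.
###Code
# Minimal NumPy sketch (illustration only) of (inverted) dropout at training time.
import numpy as np

def dropout(X, rate, rng=np.random):
    mask = rng.rand(*X.shape) >= rate       # keep each unit with probability 1 - rate
    return X * mask / (1 - rate)            # rescale survivors to preserve the expectation

X = np.ones((2, 8))
print(dropout(X, rate=0.2))
###Output
_____no_output_____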
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
_____no_output_____
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
# MC Dropout: make 100 stochastic forward passes with dropout kept active
# (training=True), then average the predicted class probabilities across the passes
y_probas = np.stack([model(X_test_scaled, training=True)
                     for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 147us/sample - loss: 0.4745 - accuracy: 0.8329 - val_loss: 0.3988 - val_accuracy: 0.8584
Epoch 2/2
55000/55000 [==============================] - 7s 135us/sample - loss: 0.3554 - accuracy: 0.8688 - val_loss: 0.3681 - val_accuracy: 0.8726
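###Markdown
As a reminder of what `max_norm(1.)` does, here is a minimal NumPy sketch (illustration only): after each training step, if the $\ell_2$ norm of a unit's incoming weight vector exceeds the threshold, the vector is rescaled so that its norm equals the threshold.
###Code
# Minimal NumPy sketch (illustration only) of the max-norm constraint applied to
# one unit's incoming weight vector.
import numpy as np

def apply_max_norm(w, max_value=1.0):
    norm = np.linalg.norm(w)
    return w if norm <= max_value else w * (max_value / norm)

print(apply_max_norm(np.array([3.0, 4.0])))   # norm 5 -> rescaled to [0.6, 0.8]
###Output
_____no_output_____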
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4872 - accuracy: 0.8296 - val_loss: 0.4141 - val_accuracy: 0.8548
Epoch 2/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3829 - accuracy: 0.8643 - val_loss: 0.3773 - val_accuracy: 0.8704
Epoch 3/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3495 - accuracy: 0.8763 - val_loss: 0.3696 - val_accuracy: 0.8730
Epoch 4/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3274 - accuracy: 0.8831 - val_loss: 0.3545 - val_accuracy: 0.8760
Epoch 5/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3102 - accuracy: 0.8899 - val_loss: 0.3460 - val_accuracy: 0.8784
Epoch 6/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2971 - accuracy: 0.8945 - val_loss: 0.3415 - val_accuracy: 0.8796
Epoch 7/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2858 - accuracy: 0.8985 - val_loss: 0.3353 - val_accuracy: 0.8834
Epoch 8/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2767 - accuracy: 0.9018 - val_loss: 0.3321 - val_accuracy: 0.8854
Epoch 9/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2685 - accuracy: 0.9043 - val_loss: 0.3281 - val_accuracy: 0.8862
Epoch 10/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2612 - accuracy: 0.9075 - val_loss: 0.3304 - val_accuracy: 0.8832
Epoch 11/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2554 - accuracy: 0.9097 - val_loss: 0.3261 - val_accuracy: 0.8868
Epoch 12/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2502 - accuracy: 0.9115 - val_loss: 0.3246 - val_accuracy: 0.8876
Epoch 13/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2456 - accuracy: 0.9133 - val_loss: 0.3243 - val_accuracy: 0.8870
Epoch 14/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2416 - accuracy: 0.9141 - val_loss: 0.3238 - val_accuracy: 0.8862
Epoch 15/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2380 - accuracy: 0.9170 - val_loss: 0.3197 - val_accuracy: 0.8876
Epoch 16/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2346 - accuracy: 0.9169 - val_loss: 0.3207 - val_accuracy: 0.8866
Epoch 17/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2321 - accuracy: 0.9186 - val_loss: 0.3182 - val_accuracy: 0.8878
Epoch 18/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2291 - accuracy: 0.9191 - val_loss: 0.3206 - val_accuracy: 0.8884
Epoch 19/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2271 - accuracy: 0.9201 - val_loss: 0.3194 - val_accuracy: 0.8876
Epoch 20/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2252 - accuracy: 0.9215 - val_loss: 0.3178 - val_accuracy: 0.8880
Epoch 21/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2234 - accuracy: 0.9218 - val_loss: 0.3171 - val_accuracy: 0.8904
Epoch 22/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2218 - accuracy: 0.9230 - val_loss: 0.3171 - val_accuracy: 0.8884
Epoch 23/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2204 - accuracy: 0.9227 - val_loss: 0.3168 - val_accuracy: 0.8882
Epoch 24/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2191 - accuracy: 0.9240 - val_loss: 0.3173 - val_accuracy: 0.8900
Epoch 25/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2182 - accuracy: 0.9239 - val_loss: 0.3166 - val_accuracy: 0.8892
###Markdown
For piecewise constant scheduling, try this:
###Code
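# Note: `n_steps_per_epoch` is assumed to be defined elsewhere, e.g.
# n_steps_per_epoch = len(X_train) // 32 for a batch size of 32.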
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
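# The two helpers below implement a learning-rate range test: the callback
# multiplies the learning rate by a constant factor after every batch while
# recording the loss, and find_learning_rate() runs it for one (or a few)
# epochs between min_rate and max_rate, then restores the model's original
# weights and learning rate so training can start fresh afterwards.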
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
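# A quick sketch to visualize the 1cycle profile this callback produces: the
# code below replays the same piecewise-linear interpolation as on_batch_begin()
# for an illustrative run of 1,000 iterations with max_rate=0.05.
sched = OneCycleScheduler(iterations=1000, max_rate=0.05)
its = np.arange(sched.iterations)
profile = np.where(
    its < sched.half_iteration,
    np.interp(its, [0, sched.half_iteration],
              [sched.start_rate, sched.max_rate]),
    np.where(
        its < 2 * sched.half_iteration,
        np.interp(its, [sched.half_iteration, 2 * sched.half_iteration],
                  [sched.max_rate, sched.start_rate]),
        np.maximum(np.interp(its, [2 * sched.half_iteration, sched.iterations],
                             [sched.start_rate, sched.last_rate]),
                   sched.last_rate)))
plt.plot(its, profile, linewidth=2)
plt.xlabel("Iteration")
plt.ylabel("Learning Rate")
plt.title("1Cycle learning rate profile (sketch)", fontsize=14)
plt.grid(True)
plt.show()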
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.6576 - accuracy: 0.7743 - val_loss: 0.4901 - val_accuracy: 0.8300
Epoch 2/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.4587 - accuracy: 0.8387 - val_loss: 0.4316 - val_accuracy: 0.8490
Epoch 3/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.4119 - accuracy: 0.8560 - val_loss: 0.4117 - val_accuracy: 0.8580
Epoch 4/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.3842 - accuracy: 0.8657 - val_loss: 0.3920 - val_accuracy: 0.8638
Epoch 5/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3636 - accuracy: 0.8708 - val_loss: 0.3739 - val_accuracy: 0.8710
Epoch 6/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3460 - accuracy: 0.8767 - val_loss: 0.3742 - val_accuracy: 0.8690
Epoch 7/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.3312 - accuracy: 0.8818 - val_loss: 0.3760 - val_accuracy: 0.8656
Epoch 8/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.3194 - accuracy: 0.8846 - val_loss: 0.3583 - val_accuracy: 0.8756
Epoch 9/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3056 - accuracy: 0.8902 - val_loss: 0.3474 - val_accuracy: 0.8820
Epoch 10/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2943 - accuracy: 0.8937 - val_loss: 0.3993 - val_accuracy: 0.8562
Epoch 11/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2845 - accuracy: 0.8957 - val_loss: 0.3446 - val_accuracy: 0.8820
Epoch 12/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2720 - accuracy: 0.9020 - val_loss: 0.3348 - val_accuracy: 0.8808
Epoch 13/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2536 - accuracy: 0.9094 - val_loss: 0.3386 - val_accuracy: 0.8822
Epoch 14/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2420 - accuracy: 0.9125 - val_loss: 0.3313 - val_accuracy: 0.8858
Epoch 15/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.2288 - accuracy: 0.9174 - val_loss: 0.3241 - val_accuracy: 0.8840
Epoch 16/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2169 - accuracy: 0.9222 - val_loss: 0.3342 - val_accuracy: 0.8846
Epoch 17/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2067 - accuracy: 0.9264 - val_loss: 0.3208 - val_accuracy: 0.8874
Epoch 18/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1977 - accuracy: 0.9301 - val_loss: 0.3186 - val_accuracy: 0.8888
Epoch 19/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1892 - accuracy: 0.9329 - val_loss: 0.3278 - val_accuracy: 0.8848
Epoch 20/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1818 - accuracy: 0.9375 - val_loss: 0.3195 - val_accuracy: 0.8894
Epoch 21/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1756 - accuracy: 0.9395 - val_loss: 0.3163 - val_accuracy: 0.8948
Epoch 22/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.1701 - accuracy: 0.9416 - val_loss: 0.3177 - val_accuracy: 0.8920
Epoch 23/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.1657 - accuracy: 0.9441 - val_loss: 0.3168 - val_accuracy: 0.8944
Epoch 24/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1629 - accuracy: 0.9454 - val_loss: 0.3167 - val_accuracy: 0.8946
Epoch 25/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.1611 - accuracy: 0.9465 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 7s 133us/sample - loss: 1.6006 - accuracy: 0.8129 - val_loss: 0.7374 - val_accuracy: 0.8236
Epoch 2/2
55000/55000 [==============================] - 7s 128us/sample - loss: 0.7179 - accuracy: 0.8265 - val_loss: 0.6905 - val_accuracy: 0.8356
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 139us/sample - loss: 0.5856 - accuracy: 0.7992 - val_loss: 0.3908 - val_accuracy: 0.8570
Epoch 2/2
55000/55000 [==============================] - 6s 117us/sample - loss: 0.4260 - accuracy: 0.8443 - val_loss: 0.3389 - val_accuracy: 0.8730
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
Train on 55000 samples
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4186 - accuracy: 0.8451
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
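###Markdown
 As a quick sanity check, here is a minimal sketch (assuming `mc_model`, `X_test_scaled` and `y_test` as defined above) that averages many stochastic forward passes over the whole test set and computes the resulting accuracy:
###Code
# Average 100 stochastic forward passes (dropout stays active in mc_model),
# then pick the most likely class for each test instance.
y_probas_mc = np.stack([mc_model.predict(X_test_scaled) for _ in range(100)])
y_pred_mc = np.argmax(y_probas_mc.mean(axis=0), axis=1)
np.mean(y_pred_mc == y_test)
###Output
_____no_output_____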
###Markdown
Max norm
###Code
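# max_norm(1.) constrains the incoming weight vector of each unit to an L2 norm
# of at most 1; Keras rescales the weights after each training step if needed.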
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 6s 114us/sample - loss: 0.4734 - accuracy: 0.8364 - val_loss: 0.3999 - val_accuracy: 0.8614
Epoch 2/2
55000/55000 [==============================] - 6s 100us/sample - loss: 0.3583 - accuracy: 0.8685 - val_loss: 0.3494 - val_accuracy: 0.8746
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
import os
import numpy as np
import tensorflow as tf
from tensorflow import keras
%load_ext tensorboard
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
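###Markdown
 A minimal sketch of how the learning-rate comparison described above could be run (using the CIFAR10 arrays loaded in the previous cell; the helper `build_lr_search_model` and the log-directory names are illustrative):
###Code
# Hypothetical sketch: train a fresh copy of the 20-layer model for 10 epochs at
# each candidate learning rate and log the curves to TensorBoard for comparison.
def build_lr_search_model(lr):
    model = keras.models.Sequential()
    model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
    for _ in range(20):
        model.add(keras.layers.Dense(100, activation="elu",
                                     kernel_initializer="he_normal"))
    model.add(keras.layers.Dense(10, activation="softmax"))
    model.compile(loss="sparse_categorical_crossentropy",
                  optimizer=keras.optimizers.Nadam(lr=lr),
                  metrics=["accuracy"])
    return model
for lr in (1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3, 1e-2):
    search_logdir = os.path.join(os.curdir, "my_cifar10_logs",
                                 "lr_search_{:.0e}".format(lr))
    build_lr_search_model(lr).fit(X_train, y_train, epochs=10,
                                  validation_data=(X_valid, y_valid),
                                  callbacks=[keras.callbacks.TensorBoard(search_logdir)])
###Output
_____no_output_____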
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 2 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 2ms/step - loss: 1.5767 - accuracy: 0.1326
###Markdown
The model with the lowest validation loss gets about 47% accuracy on the validation set. It took 39 epochs to reach the lowest validation loss, with roughly 10 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 21s 466us/sample - loss: 1.8365 - accuracy: 0.3390 - val_loss: 1.6330 - val_accuracy: 0.4174
Epoch 2/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.6623 - accuracy: 0.4063 - val_loss: 1.5967 - val_accuracy: 0.4204
Epoch 3/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.5946 - accuracy: 0.4314 - val_loss: 1.5225 - val_accuracy: 0.4602
Epoch 4/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5417 - accuracy: 0.4551 - val_loss: 1.4680 - val_accuracy: 0.4756
Epoch 5/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5013 - accuracy: 0.4678 - val_loss: 1.4378 - val_accuracy: 0.4862
Epoch 6/100
45000/45000 [==============================] - 16s 361us/sample - loss: 1.4637 - accuracy: 0.4797 - val_loss: 1.4221 - val_accuracy: 0.4982
Epoch 7/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.4361 - accuracy: 0.4921 - val_loss: 1.4133 - val_accuracy: 0.4968
Epoch 8/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.4078 - accuracy: 0.4998 - val_loss: 1.3916 - val_accuracy: 0.5040
Epoch 9/100
45000/45000 [==============================] - 14s 315us/sample - loss: 1.3811 - accuracy: 0.5104 - val_loss: 1.3695 - val_accuracy: 0.5116
Epoch 10/100
45000/45000 [==============================] - 14s 318us/sample - loss: 1.3571 - accuracy: 0.5205 - val_loss: 1.3701 - val_accuracy: 0.5112
Epoch 11/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.3367 - accuracy: 0.5246 - val_loss: 1.3549 - val_accuracy: 0.5196
Epoch 12/100
45000/45000 [==============================] - 14s 316us/sample - loss: 1.3158 - accuracy: 0.5322 - val_loss: 1.4038 - val_accuracy: 0.5048
Epoch 13/100
45000/45000 [==============================] - 15s 328us/sample - loss: 1.3028 - accuracy: 0.5392 - val_loss: 1.3453 - val_accuracy: 0.5242
Epoch 14/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2798 - accuracy: 0.5460 - val_loss: 1.3427 - val_accuracy: 0.5218
Epoch 15/100
45000/45000 [==============================] - 15s 327us/sample - loss: 1.2642 - accuracy: 0.5502 - val_loss: 1.3802 - val_accuracy: 0.5072
Epoch 16/100
45000/45000 [==============================] - 15s 336us/sample - loss: 1.2497 - accuracy: 0.5592 - val_loss: 1.3870 - val_accuracy: 0.5154
Epoch 17/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.2339 - accuracy: 0.5645 - val_loss: 1.3270 - val_accuracy: 0.5366
Epoch 18/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2223 - accuracy: 0.5688 - val_loss: 1.3054 - val_accuracy: 0.5506
Epoch 19/100
45000/45000 [==============================] - 15s 339us/sample - loss: 1.2015 - accuracy: 0.5750 - val_loss: 1.3134 - val_accuracy: 0.5462
Epoch 20/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.1884 - accuracy: 0.5796 - val_loss: 1.3459 - val_accuracy: 0.5252
Epoch 21/100
45000/45000 [==============================] - 17s 370us/sample - loss: 1.1767 - accuracy: 0.5876 - val_loss: 1.3404 - val_accuracy: 0.5392
Epoch 22/100
45000/45000 [==============================] - 16s 366us/sample - loss: 1.1679 - accuracy: 0.5872 - val_loss: 1.3600 - val_accuracy: 0.5332
Epoch 23/100
45000/45000 [==============================] - 15s 337us/sample - loss: 1.1513 - accuracy: 0.5954 - val_loss: 1.3148 - val_accuracy: 0.5498
Epoch 24/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.1345 - accuracy: 0.6033 - val_loss: 1.3290 - val_accuracy: 0.5368
Epoch 25/100
45000/45000 [==============================] - 16s 350us/sample - loss: 1.1252 - accuracy: 0.6025 - val_loss: 1.3350 - val_accuracy: 0.5434
Epoch 26/100
45000/45000 [==============================] - 15s 341us/sample - loss: 1.1192 - accuracy: 0.6070 - val_loss: 1.3423 - val_accuracy: 0.5364
Epoch 27/100
45000/45000 [==============================] - 15s 342us/sample - loss: 1.1028 - accuracy: 0.6093 - val_loss: 1.3511 - val_accuracy: 0.5358
Epoch 28/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.0907 - accuracy: 0.6158 - val_loss: 1.3706 - val_accuracy: 0.5350
Epoch 29/100
45000/45000 [==============================] - 16s 345us/sample - loss: 1.0785 - accuracy: 0.6197 - val_loss: 1.3356 - val_accuracy: 0.5398
Epoch 30/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.0718 - accuracy: 0.6198 - val_loss: 1.3529 - val_accuracy: 0.5446
Epoch 31/100
45000/45000 [==============================] - 15s 333us/sample - loss: 1.0629 - accuracy: 0.6259 - val_loss: 1.3590 - val_accuracy: 0.5434
Epoch 32/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.0504 - accuracy: 0.6292 - val_loss: 1.3448 - val_accuracy: 0.5388
Epoch 33/100
45000/45000 [==============================] - 15s 325us/sample - loss: 1.0420 - accuracy: 0.6318 - val_loss: 1.3790 - val_accuracy: 0.5350
Epoch 34/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.0304 - accuracy: 0.6362 - val_loss: 1.3621 - val_accuracy: 0.5430
Epoch 35/100
45000/45000 [==============================] - 16s 356us/sample - loss: 1.0280 - accuracy: 0.6362 - val_loss: 1.3673 - val_accuracy: 0.5366
Epoch 36/100
45000/45000 [==============================] - 16s 354us/sample - loss: 1.0100 - accuracy: 0.6439 - val_loss: 1.3659 - val_accuracy: 0.5420
Epoch 37/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.0060 - accuracy: 0.6473 - val_loss: 1.3773 - val_accuracy: 0.5398
Epoch 38/100
45000/45000 [==============================] - 15s 332us/sample - loss: 0.9966 - accuracy: 0.6496 - val_loss: 1.3946 - val_accuracy: 0.5340
5000/5000 [==============================] - 1s 157us/sample - loss: 1.3054 - accuracy: 0.5506
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 39 epochs to reach the lowest validation loss, while the new model with BN took 18 epochs. That's more than twice as fast as the previous model. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 55% accuracy instead of 47%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged twice as fast, each epoch took about 16s instead of 10s, because of the extra computations required by the BN layers. So overall, although the number of epochs was reduced by 50%, the training time (wall time) was shortened by 30%. Which is still pretty significant! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustements to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
5000/5000 [==============================] - 0s 74us/sample - loss: 1.4626 - accuracy: 0.5140
###Markdown
We get 51.4% accuracy, which is better than the original model, but not quite as good as the model using batch normalization. Moreover, it took 13 epochs to reach the best model, which is much faster than both the original model and the BN model, plus each epoch took only 10 seconds, just like the original model. So it's by far the fastest model to train (both in terms of epochs and wall time). e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 12s 263us/sample - loss: 1.8763 - accuracy: 0.3330 - val_loss: 1.7595 - val_accuracy: 0.3668
Epoch 2/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.6527 - accuracy: 0.4148 - val_loss: 1.7666 - val_accuracy: 0.3808
Epoch 3/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.5682 - accuracy: 0.4439 - val_loss: 1.6393 - val_accuracy: 0.4490
Epoch 4/100
45000/45000 [==============================] - 10s 211us/sample - loss: 1.5030 - accuracy: 0.4698 - val_loss: 1.6028 - val_accuracy: 0.4466
Epoch 5/100
45000/45000 [==============================] - 9s 209us/sample - loss: 1.4430 - accuracy: 0.4913 - val_loss: 1.5394 - val_accuracy: 0.4562
Epoch 6/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.4005 - accuracy: 0.5084 - val_loss: 1.5408 - val_accuracy: 0.4818
Epoch 7/100
45000/45000 [==============================] - 10s 216us/sample - loss: 1.3541 - accuracy: 0.5298 - val_loss: 1.5236 - val_accuracy: 0.4866
Epoch 8/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.3189 - accuracy: 0.5405 - val_loss: 1.5174 - val_accuracy: 0.4926
Epoch 9/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.2800 - accuracy: 0.5570 - val_loss: 1.5722 - val_accuracy: 0.4998
Epoch 10/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.2512 - accuracy: 0.5656 - val_loss: 1.4974 - val_accuracy: 0.5082
Epoch 11/100
45000/45000 [==============================] - 9s 203us/sample - loss: 1.2141 - accuracy: 0.5802 - val_loss: 1.6123 - val_accuracy: 0.4916
Epoch 12/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.1856 - accuracy: 0.5893 - val_loss: 1.5449 - val_accuracy: 0.5016
Epoch 13/100
45000/45000 [==============================] - 9s 204us/sample - loss: 1.1602 - accuracy: 0.5978 - val_loss: 1.6241 - val_accuracy: 0.5056
Epoch 14/100
45000/45000 [==============================] - 9s 199us/sample - loss: 1.1290 - accuracy: 0.6118 - val_loss: 1.6085 - val_accuracy: 0.4936
Epoch 15/100
45000/45000 [==============================] - 9s 198us/sample - loss: 1.1050 - accuracy: 0.6176 - val_loss: 1.6951 - val_accuracy: 0.4860
Epoch 16/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.0786 - accuracy: 0.6293 - val_loss: 1.5806 - val_accuracy: 0.5044
Epoch 17/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.0629 - accuracy: 0.6362 - val_loss: 1.5932 - val_accuracy: 0.4970
Epoch 18/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.0330 - accuracy: 0.6458 - val_loss: 1.5968 - val_accuracy: 0.5080
Epoch 19/100
45000/45000 [==============================] - 9s 195us/sample - loss: 1.0104 - accuracy: 0.6488 - val_loss: 1.6166 - val_accuracy: 0.5152
Epoch 20/100
45000/45000 [==============================] - 9s 206us/sample - loss: 0.9896 - accuracy: 0.6629 - val_loss: 1.6174 - val_accuracy: 0.5154
Epoch 21/100
45000/45000 [==============================] - 9s 211us/sample - loss: 0.9741 - accuracy: 0.6650 - val_loss: 1.7201 - val_accuracy: 0.5040
Epoch 22/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9475 - accuracy: 0.6769 - val_loss: 1.7498 - val_accuracy: 0.5176
Epoch 23/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.9346 - accuracy: 0.6780 - val_loss: 1.7491 - val_accuracy: 0.5020
Epoch 24/100
45000/45000 [==============================] - 10s 223us/sample - loss: 1.1878 - accuracy: 0.6792 - val_loss: 1.6664 - val_accuracy: 0.4906
Epoch 25/100
45000/45000 [==============================] - 10s 219us/sample - loss: 0.9851 - accuracy: 0.6646 - val_loss: 1.7358 - val_accuracy: 0.5086
Epoch 26/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9053 - accuracy: 0.6911 - val_loss: 1.8361 - val_accuracy: 0.5094
Epoch 27/100
45000/45000 [==============================] - 10s 215us/sample - loss: 0.8681 - accuracy: 0.7048 - val_loss: 1.8487 - val_accuracy: 0.5036
Epoch 28/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.8460 - accuracy: 0.7132 - val_loss: 1.8516 - val_accuracy: 0.5068
Epoch 29/100
45000/45000 [==============================] - 10s 223us/sample - loss: 0.8258 - accuracy: 0.7208 - val_loss: 1.9383 - val_accuracy: 0.5094
Epoch 30/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.8106 - accuracy: 0.7248 - val_loss: 2.0527 - val_accuracy: 0.4974
5000/5000 [==============================] - 0s 71us/sample - loss: 1.4974 - accuracy: 0.5082
###Markdown
The model reaches 50.8% accuracy on the validation set. That's very slightly worse than without dropout (51.4%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get virtually no accuracy improvement in this case (from 50.8% to 50.9%). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(len(X_train_scaled) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/15
45000/45000 [==============================] - 3s 69us/sample - loss: 2.0504 - accuracy: 0.2823 - val_loss: 1.7711 - val_accuracy: 0.3706
Epoch 2/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.7626 - accuracy: 0.3766 - val_loss: 1.7751 - val_accuracy: 0.3844
Epoch 3/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.6264 - accuracy: 0.4272 - val_loss: 1.6774 - val_accuracy: 0.4216
Epoch 4/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.5527 - accuracy: 0.4474 - val_loss: 1.6633 - val_accuracy: 0.4316
Epoch 5/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.4997 - accuracy: 0.4701 - val_loss: 1.5909 - val_accuracy: 0.4540
Epoch 6/15
45000/45000 [==============================] - 3s 60us/sample - loss: 1.4564 - accuracy: 0.4841 - val_loss: 1.5982 - val_accuracy: 0.4624
Epoch 7/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.4232 - accuracy: 0.4958 - val_loss: 1.6417 - val_accuracy: 0.4382
Epoch 8/15
45000/45000 [==============================] - 3s 58us/sample - loss: 1.3530 - accuracy: 0.5199 - val_loss: 1.5050 - val_accuracy: 0.4778
Epoch 9/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.2771 - accuracy: 0.5480 - val_loss: 1.5254 - val_accuracy: 0.4928
Epoch 10/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.2073 - accuracy: 0.5726 - val_loss: 1.5013 - val_accuracy: 0.5052
Epoch 11/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.1380 - accuracy: 0.5948 - val_loss: 1.4941 - val_accuracy: 0.5170
Epoch 12/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.0672 - accuracy: 0.6204 - val_loss: 1.5091 - val_accuracy: 0.5106
Epoch 13/15
45000/45000 [==============================] - 3s 56us/sample - loss: 0.9967 - accuracy: 0.6466 - val_loss: 1.5261 - val_accuracy: 0.5212
Epoch 14/15
45000/45000 [==============================] - 3s 58us/sample - loss: 0.9301 - accuracy: 0.6712 - val_loss: 1.5437 - val_accuracy: 0.5264
Epoch 15/15
45000/45000 [==============================] - 3s 59us/sample - loss: 0.8893 - accuracy: 0.6866 - val_loss: 1.5650 - val_accuracy: 0.5276
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure Matplotlib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
    # Despite its name, this computes the logistic sigmoid σ(z) = 1 / (1 + exp(-z)), not its inverse.
    return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 2s 41us/sample - loss: 1.2810 - accuracy: 0.6205 - val_loss: 0.8869 - val_accuracy: 0.7160
Epoch 2/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.7952 - accuracy: 0.7369 - val_loss: 0.7132 - val_accuracy: 0.7626
Epoch 3/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6817 - accuracy: 0.7726 - val_loss: 0.6385 - val_accuracy: 0.7894
Epoch 4/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6219 - accuracy: 0.7942 - val_loss: 0.5931 - val_accuracy: 0.8016
Epoch 5/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5830 - accuracy: 0.8074 - val_loss: 0.5607 - val_accuracy: 0.8170
Epoch 6/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5552 - accuracy: 0.8172 - val_loss: 0.5355 - val_accuracy: 0.8238
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5339 - accuracy: 0.8226 - val_loss: 0.5166 - val_accuracy: 0.8298
Epoch 8/10
55000/55000 [==============================] - 2s 43us/sample - loss: 0.5173 - accuracy: 0.8262 - val_loss: 0.5043 - val_accuracy: 0.8356
Epoch 9/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5039 - accuracy: 0.8306 - val_loss: 0.4889 - val_accuracy: 0.8384
Epoch 10/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.4923 - accuracy: 0.8333 - val_loss: 0.4816 - val_accuracy: 0.8394
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 47us/sample - loss: 1.3452 - accuracy: 0.6203 - val_loss: 0.9241 - val_accuracy: 0.7170
Epoch 2/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.8196 - accuracy: 0.7364 - val_loss: 0.7314 - val_accuracy: 0.7600
Epoch 3/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.6970 - accuracy: 0.7701 - val_loss: 0.6517 - val_accuracy: 0.7880
Epoch 4/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.6333 - accuracy: 0.7914 - val_loss: 0.6032 - val_accuracy: 0.8050
Epoch 5/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5916 - accuracy: 0.8049 - val_loss: 0.5689 - val_accuracy: 0.8162
Epoch 6/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5619 - accuracy: 0.8143 - val_loss: 0.5416 - val_accuracy: 0.8222
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5391 - accuracy: 0.8208 - val_loss: 0.5213 - val_accuracy: 0.8300
Epoch 8/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5214 - accuracy: 0.8258 - val_loss: 0.5075 - val_accuracy: 0.8348
Epoch 9/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5070 - accuracy: 0.8287 - val_loss: 0.4917 - val_accuracy: 0.8380
Epoch 10/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.4946 - accuracy: 0.8322 - val_loss: 0.4839 - val_accuracy: 0.8378
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
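# These constants evaluate to alpha ≈ 1.6733 and scale ≈ 1.0507.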
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 13s 238us/sample - loss: 1.1277 - accuracy: 0.5573 - val_loss: 0.8152 - val_accuracy: 0.6700
Epoch 2/5
55000/55000 [==============================] - 11s 198us/sample - loss: 0.6935 - accuracy: 0.7383 - val_loss: 0.5806 - val_accuracy: 0.7928
Epoch 3/5
55000/55000 [==============================] - 11s 196us/sample - loss: 0.5871 - accuracy: 0.7865 - val_loss: 0.6876 - val_accuracy: 0.7462
Epoch 4/5
55000/55000 [==============================] - 11s 199us/sample - loss: 0.5281 - accuracy: 0.8134 - val_loss: 0.5236 - val_accuracy: 0.8230
Epoch 5/5
55000/55000 [==============================] - 11s 201us/sample - loss: 0.4824 - accuracy: 0.8327 - val_loss: 0.5201 - val_accuracy: 0.8312
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 12s 213us/sample - loss: 1.7518 - accuracy: 0.2797 - val_loss: 1.2328 - val_accuracy: 0.4720
Epoch 2/5
55000/55000 [==============================] - 10s 177us/sample - loss: 1.1922 - accuracy: 0.4982 - val_loss: 1.0247 - val_accuracy: 0.5354
Epoch 3/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.9390 - accuracy: 0.6180 - val_loss: 1.0809 - val_accuracy: 0.5118
Epoch 4/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.7787 - accuracy: 0.6937 - val_loss: 0.7067 - val_accuracy: 0.7344
Epoch 5/5
55000/55000 [==============================] - 10s 180us/sample - loss: 0.7465 - accuracy: 0.7122 - val_loss: 0.9720 - val_accuracy: 0.5702
###Markdown
Not great at all: we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
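# Each BN layer has two trainable variables (gamma and beta) and two
# non-trainable ones (moving_mean and moving_variance):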
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 63us/sample - loss: 0.8760 - accuracy: 0.7122 - val_loss: 0.5509 - val_accuracy: 0.8224
Epoch 2/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5737 - accuracy: 0.8039 - val_loss: 0.4723 - val_accuracy: 0.8460
Epoch 3/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5143 - accuracy: 0.8231 - val_loss: 0.4376 - val_accuracy: 0.8570
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4826 - accuracy: 0.8333 - val_loss: 0.4135 - val_accuracy: 0.8638
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4571 - accuracy: 0.8415 - val_loss: 0.3990 - val_accuracy: 0.8654
Epoch 6/10
55000/55000 [==============================] - 3s 53us/sample - loss: 0.4432 - accuracy: 0.8456 - val_loss: 0.3870 - val_accuracy: 0.8710
Epoch 7/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.4255 - accuracy: 0.8515 - val_loss: 0.3782 - val_accuracy: 0.8698
Epoch 8/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4150 - accuracy: 0.8536 - val_loss: 0.3708 - val_accuracy: 0.8758
Epoch 9/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4016 - accuracy: 0.8596 - val_loss: 0.3634 - val_accuracy: 0.8750
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3915 - accuracy: 0.8629 - val_loss: 0.3601 - val_accuracy: 0.8758
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer adds an offset parameter of its own; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 4s 64us/sample - loss: 0.8656 - accuracy: 0.7094 - val_loss: 0.5650 - val_accuracy: 0.8098
Epoch 2/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5766 - accuracy: 0.8018 - val_loss: 0.4834 - val_accuracy: 0.8358
Epoch 3/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5184 - accuracy: 0.8216 - val_loss: 0.4461 - val_accuracy: 0.8470
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4852 - accuracy: 0.8314 - val_loss: 0.4226 - val_accuracy: 0.8558
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4579 - accuracy: 0.8399 - val_loss: 0.4086 - val_accuracy: 0.8604
Epoch 6/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4406 - accuracy: 0.8457 - val_loss: 0.3974 - val_accuracy: 0.8640
Epoch 7/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4263 - accuracy: 0.8498 - val_loss: 0.3883 - val_accuracy: 0.8676
Epoch 8/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4152 - accuracy: 0.8530 - val_loss: 0.3803 - val_accuracy: 0.8682
Epoch 9/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4032 - accuracy: 0.8564 - val_loss: 0.3738 - val_accuracy: 0.8718
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3937 - accuracy: 0.8623 - val_loss: 0.3690 - val_accuracy: 0.8732
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
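###Markdown
The clipped optimizer is then used like any other optimizer. A minimal sketch (simply reusing whichever `model` is currently defined, just to show where the optimizer goes):
###Code
model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.SGD(clipvalue=1.0), metrics=["accuracy"])  # clipvalue=1.0 clips each gradient component to [-1.0, 1.0]
###Output
_____no_output_____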
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:
* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).
* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.

The validation set and the test set are also split this way, but without restricting the number of images. We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
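# Note: model_B_on_A shares its layers with model_A, so training it would also
# modify model_A. Cloning model_A (and copying its weights) keeps an untouched copy.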
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Train on 200 samples, validate on 986 samples
Epoch 1/4
200/200 [==============================] - 0s 2ms/sample - loss: 0.5619 - accuracy: 0.6650 - val_loss: 0.5669 - val_accuracy: 0.6531
Epoch 2/4
200/200 [==============================] - 0s 208us/sample - loss: 0.5249 - accuracy: 0.7200 - val_loss: 0.5337 - val_accuracy: 0.6957
Epoch 3/4
200/200 [==============================] - 0s 200us/sample - loss: 0.4923 - accuracy: 0.7400 - val_loss: 0.5039 - val_accuracy: 0.7211
Epoch 4/4
200/200 [==============================] - 0s 214us/sample - loss: 0.4630 - accuracy: 0.7550 - val_loss: 0.4773 - val_accuracy: 0.7383
Train on 200 samples, validate on 986 samples
Epoch 1/16
200/200 [==============================] - 0s 2ms/sample - loss: 0.3864 - accuracy: 0.8200 - val_loss: 0.3357 - val_accuracy: 0.8661
Epoch 2/16
200/200 [==============================] - 0s 207us/sample - loss: 0.2701 - accuracy: 0.9350 - val_loss: 0.2608 - val_accuracy: 0.9249
Epoch 3/16
200/200 [==============================] - 0s 226us/sample - loss: 0.2082 - accuracy: 0.9650 - val_loss: 0.2150 - val_accuracy: 0.9503
Epoch 4/16
200/200 [==============================] - 0s 212us/sample - loss: 0.1695 - accuracy: 0.9800 - val_loss: 0.1840 - val_accuracy: 0.9625
Epoch 5/16
200/200 [==============================] - 0s 226us/sample - loss: 0.1428 - accuracy: 0.9800 - val_loss: 0.1602 - val_accuracy: 0.9706
Epoch 6/16
200/200 [==============================] - 0s 236us/sample - loss: 0.1221 - accuracy: 0.9850 - val_loss: 0.1424 - val_accuracy: 0.9797
Epoch 7/16
200/200 [==============================] - 0s 218us/sample - loss: 0.1067 - accuracy: 0.9950 - val_loss: 0.1293 - val_accuracy: 0.9828
Epoch 8/16
200/200 [==============================] - 0s 229us/sample - loss: 0.0952 - accuracy: 0.9950 - val_loss: 0.1186 - val_accuracy: 0.9848
Epoch 9/16
200/200 [==============================] - 0s 224us/sample - loss: 0.0858 - accuracy: 0.9950 - val_loss: 0.1099 - val_accuracy: 0.9848
Epoch 10/16
200/200 [==============================] - 0s 241us/sample - loss: 0.0781 - accuracy: 1.0000 - val_loss: 0.1026 - val_accuracy: 0.9878
Epoch 11/16
200/200 [==============================] - 0s 234us/sample - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0964 - val_accuracy: 0.9888
Epoch 12/16
200/200 [==============================] - 0s 222us/sample - loss: 0.0664 - accuracy: 1.0000 - val_loss: 0.0906 - val_accuracy: 0.9888
Epoch 13/16
200/200 [==============================] - 0s 228us/sample - loss: 0.0614 - accuracy: 1.0000 - val_loss: 0.0862 - val_accuracy: 0.9899
Epoch 14/16
200/200 [==============================] - 0s 225us/sample - loss: 0.0575 - accuracy: 1.0000 - val_loss: 0.0818 - val_accuracy: 0.9899
Epoch 15/16
200/200 [==============================] - 0s 219us/sample - loss: 0.0537 - accuracy: 1.0000 - val_loss: 0.0782 - val_accuracy: 0.9899
Epoch 16/16
200/200 [==============================] - 0s 221us/sample - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.0752 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
2000/2000 [==============================] - 0s 25us/sample - loss: 0.0697 - accuracy: 0.9925
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4!
###Code
(100 - 96.95) / (100 - 99.25)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```
* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
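# This variant multiplies the optimizer's current learning rate by 0.1**(1/20)
# at each epoch, so it depends on the initial lr set on the optimizer. It is
# passed to the callback in the same way, e.g.:
# lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)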
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
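# ExponentialDecay(initial_lr, decay_steps, decay_rate): lr(step) = 0.01 * 0.1**(step / s)
# (the decay is smooth, since staircase defaults to False)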
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4872 - accuracy: 0.8296 - val_loss: 0.4141 - val_accuracy: 0.8548
Epoch 2/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3829 - accuracy: 0.8643 - val_loss: 0.3773 - val_accuracy: 0.8704
Epoch 3/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3495 - accuracy: 0.8763 - val_loss: 0.3696 - val_accuracy: 0.8730
Epoch 4/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3274 - accuracy: 0.8831 - val_loss: 0.3545 - val_accuracy: 0.8760
Epoch 5/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3102 - accuracy: 0.8899 - val_loss: 0.3460 - val_accuracy: 0.8784
Epoch 6/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2971 - accuracy: 0.8945 - val_loss: 0.3415 - val_accuracy: 0.8796
Epoch 7/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2858 - accuracy: 0.8985 - val_loss: 0.3353 - val_accuracy: 0.8834
Epoch 8/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2767 - accuracy: 0.9018 - val_loss: 0.3321 - val_accuracy: 0.8854
Epoch 9/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2685 - accuracy: 0.9043 - val_loss: 0.3281 - val_accuracy: 0.8862
Epoch 10/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2612 - accuracy: 0.9075 - val_loss: 0.3304 - val_accuracy: 0.8832
Epoch 11/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2554 - accuracy: 0.9097 - val_loss: 0.3261 - val_accuracy: 0.8868
Epoch 12/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2502 - accuracy: 0.9115 - val_loss: 0.3246 - val_accuracy: 0.8876
Epoch 13/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2456 - accuracy: 0.9133 - val_loss: 0.3243 - val_accuracy: 0.8870
Epoch 14/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2416 - accuracy: 0.9141 - val_loss: 0.3238 - val_accuracy: 0.8862
Epoch 15/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2380 - accuracy: 0.9170 - val_loss: 0.3197 - val_accuracy: 0.8876
Epoch 16/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2346 - accuracy: 0.9169 - val_loss: 0.3207 - val_accuracy: 0.8866
Epoch 17/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2321 - accuracy: 0.9186 - val_loss: 0.3182 - val_accuracy: 0.8878
Epoch 18/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2291 - accuracy: 0.9191 - val_loss: 0.3206 - val_accuracy: 0.8884
Epoch 19/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2271 - accuracy: 0.9201 - val_loss: 0.3194 - val_accuracy: 0.8876
Epoch 20/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2252 - accuracy: 0.9215 - val_loss: 0.3178 - val_accuracy: 0.8880
Epoch 21/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2234 - accuracy: 0.9218 - val_loss: 0.3171 - val_accuracy: 0.8904
Epoch 22/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2218 - accuracy: 0.9230 - val_loss: 0.3171 - val_accuracy: 0.8884
Epoch 23/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2204 - accuracy: 0.9227 - val_loss: 0.3168 - val_accuracy: 0.8882
Epoch 24/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2191 - accuracy: 0.9240 - val_loss: 0.3173 - val_accuracy: 0.8900
Epoch 25/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2182 - accuracy: 0.9239 - val_loss: 0.3166 - val_accuracy: 0.8892
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
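# A schedule object like this is passed directly to an optimizer, e.g. (sketch):
# optimizer = keras.optimizers.SGD(learning_rate)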
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
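# find_learning_rate: grow the learning rate exponentially from min_rate to
# max_rate over a single short run, record the loss after every batch, then
# restore the model's initial weights and learning rate.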
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
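# OneCycleScheduler (1cycle): ramp the lr linearly from start_rate up to
# max_rate over the first half of training, back down to start_rate over the
# second half, then decay linearly to last_rate over the final iterations.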
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.6576 - accuracy: 0.7743 - val_loss: 0.4901 - val_accuracy: 0.8300
Epoch 2/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.4587 - accuracy: 0.8387 - val_loss: 0.4316 - val_accuracy: 0.8490
Epoch 3/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.4119 - accuracy: 0.8560 - val_loss: 0.4117 - val_accuracy: 0.8580
Epoch 4/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.3842 - accuracy: 0.8657 - val_loss: 0.3920 - val_accuracy: 0.8638
Epoch 5/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3636 - accuracy: 0.8708 - val_loss: 0.3739 - val_accuracy: 0.8710
Epoch 6/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3460 - accuracy: 0.8767 - val_loss: 0.3742 - val_accuracy: 0.8690
Epoch 7/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.3312 - accuracy: 0.8818 - val_loss: 0.3760 - val_accuracy: 0.8656
Epoch 8/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.3194 - accuracy: 0.8846 - val_loss: 0.3583 - val_accuracy: 0.8756
Epoch 9/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3056 - accuracy: 0.8902 - val_loss: 0.3474 - val_accuracy: 0.8820
Epoch 10/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2943 - accuracy: 0.8937 - val_loss: 0.3993 - val_accuracy: 0.8562
Epoch 11/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2845 - accuracy: 0.8957 - val_loss: 0.3446 - val_accuracy: 0.8820
Epoch 12/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2720 - accuracy: 0.9020 - val_loss: 0.3348 - val_accuracy: 0.8808
Epoch 13/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2536 - accuracy: 0.9094 - val_loss: 0.3386 - val_accuracy: 0.8822
Epoch 14/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2420 - accuracy: 0.9125 - val_loss: 0.3313 - val_accuracy: 0.8858
Epoch 15/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.2288 - accuracy: 0.9174 - val_loss: 0.3241 - val_accuracy: 0.8840
Epoch 16/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2169 - accuracy: 0.9222 - val_loss: 0.3342 - val_accuracy: 0.8846
Epoch 17/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2067 - accuracy: 0.9264 - val_loss: 0.3208 - val_accuracy: 0.8874
Epoch 18/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1977 - accuracy: 0.9301 - val_loss: 0.3186 - val_accuracy: 0.8888
Epoch 19/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1892 - accuracy: 0.9329 - val_loss: 0.3278 - val_accuracy: 0.8848
Epoch 20/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1818 - accuracy: 0.9375 - val_loss: 0.3195 - val_accuracy: 0.8894
Epoch 21/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1756 - accuracy: 0.9395 - val_loss: 0.3163 - val_accuracy: 0.8948
Epoch 22/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.1701 - accuracy: 0.9416 - val_loss: 0.3177 - val_accuracy: 0.8920
Epoch 23/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.1657 - accuracy: 0.9441 - val_loss: 0.3168 - val_accuracy: 0.8944
Epoch 24/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1629 - accuracy: 0.9454 - val_loss: 0.3167 - val_accuracy: 0.8946
Epoch 25/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.1611 - accuracy: 0.9465 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 7s 133us/sample - loss: 1.6006 - accuracy: 0.8129 - val_loss: 0.7374 - val_accuracy: 0.8236
Epoch 2/2
55000/55000 [==============================] - 7s 128us/sample - loss: 0.7179 - accuracy: 0.8265 - val_loss: 0.6905 - val_accuracy: 0.8356
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 139us/sample - loss: 0.5856 - accuracy: 0.7992 - val_loss: 0.3908 - val_accuracy: 0.8570
Epoch 2/2
55000/55000 [==============================] - 6s 117us/sample - loss: 0.4260 - accuracy: 0.8443 - val_loss: 0.3389 - val_accuracy: 0.8730
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
Train on 55000 samples
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4186 - accuracy: 0.8451
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
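# With training=True, the (Alpha)Dropout layers stay active at inference time,
# so each call returns a different stochastic prediction; averaging 100 of them
# gives the MC Dropout estimate.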
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
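# Subclassing the dropout layers so they stay active even when the model is
# called without training=True (used below to build an MC Dropout model):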
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
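# max_norm(1.) rescales each neuron's incoming weight vector after each
# training step so that its L2 norm never exceeds 1.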
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 6s 114us/sample - loss: 0.4734 - accuracy: 0.8364 - val_loss: 0.3999 - val_accuracy: 0.8614
Epoch 2/2
55000/55000 [==============================] - 6s 100us/sample - loss: 0.3583 - accuracy: 0.8685 - val_loss: 0.3494 - val_accuracy: 0.8746
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
5000/5000 [==============================] - 0s 65us/sample - loss: 1.5099 - accuracy: 0.4736
###Markdown
The model with the lowest validation loss gets about 47% accuracy on the validation set. It took 39 epochs to reach the lowest validation loss, with roughly 10 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:
* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.
* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.
* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 21s 466us/sample - loss: 1.8365 - accuracy: 0.3390 - val_loss: 1.6330 - val_accuracy: 0.4174
Epoch 2/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.6623 - accuracy: 0.4063 - val_loss: 1.5967 - val_accuracy: 0.4204
Epoch 3/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.5946 - accuracy: 0.4314 - val_loss: 1.5225 - val_accuracy: 0.4602
Epoch 4/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5417 - accuracy: 0.4551 - val_loss: 1.4680 - val_accuracy: 0.4756
Epoch 5/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5013 - accuracy: 0.4678 - val_loss: 1.4378 - val_accuracy: 0.4862
Epoch 6/100
45000/45000 [==============================] - 16s 361us/sample - loss: 1.4637 - accuracy: 0.4797 - val_loss: 1.4221 - val_accuracy: 0.4982
Epoch 7/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.4361 - accuracy: 0.4921 - val_loss: 1.4133 - val_accuracy: 0.4968
Epoch 8/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.4078 - accuracy: 0.4998 - val_loss: 1.3916 - val_accuracy: 0.5040
Epoch 9/100
45000/45000 [==============================] - 14s 315us/sample - loss: 1.3811 - accuracy: 0.5104 - val_loss: 1.3695 - val_accuracy: 0.5116
Epoch 10/100
45000/45000 [==============================] - 14s 318us/sample - loss: 1.3571 - accuracy: 0.5205 - val_loss: 1.3701 - val_accuracy: 0.5112
Epoch 11/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.3367 - accuracy: 0.5246 - val_loss: 1.3549 - val_accuracy: 0.5196
Epoch 12/100
45000/45000 [==============================] - 14s 316us/sample - loss: 1.3158 - accuracy: 0.5322 - val_loss: 1.4038 - val_accuracy: 0.5048
Epoch 13/100
45000/45000 [==============================] - 15s 328us/sample - loss: 1.3028 - accuracy: 0.5392 - val_loss: 1.3453 - val_accuracy: 0.5242
Epoch 14/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2798 - accuracy: 0.5460 - val_loss: 1.3427 - val_accuracy: 0.5218
Epoch 15/100
45000/45000 [==============================] - 15s 327us/sample - loss: 1.2642 - accuracy: 0.5502 - val_loss: 1.3802 - val_accuracy: 0.5072
Epoch 16/100
45000/45000 [==============================] - 15s 336us/sample - loss: 1.2497 - accuracy: 0.5592 - val_loss: 1.3870 - val_accuracy: 0.5154
Epoch 17/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.2339 - accuracy: 0.5645 - val_loss: 1.3270 - val_accuracy: 0.5366
Epoch 18/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2223 - accuracy: 0.5688 - val_loss: 1.3054 - val_accuracy: 0.5506
Epoch 19/100
45000/45000 [==============================] - 15s 339us/sample - loss: 1.2015 - accuracy: 0.5750 - val_loss: 1.3134 - val_accuracy: 0.5462
Epoch 20/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.1884 - accuracy: 0.5796 - val_loss: 1.3459 - val_accuracy: 0.5252
Epoch 21/100
45000/45000 [==============================] - 17s 370us/sample - loss: 1.1767 - accuracy: 0.5876 - val_loss: 1.3404 - val_accuracy: 0.5392
Epoch 22/100
45000/45000 [==============================] - 16s 366us/sample - loss: 1.1679 - accuracy: 0.5872 - val_loss: 1.3600 - val_accuracy: 0.5332
Epoch 23/100
45000/45000 [==============================] - 15s 337us/sample - loss: 1.1513 - accuracy: 0.5954 - val_loss: 1.3148 - val_accuracy: 0.5498
Epoch 24/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.1345 - accuracy: 0.6033 - val_loss: 1.3290 - val_accuracy: 0.5368
Epoch 25/100
45000/45000 [==============================] - 16s 350us/sample - loss: 1.1252 - accuracy: 0.6025 - val_loss: 1.3350 - val_accuracy: 0.5434
Epoch 26/100
45000/45000 [==============================] - 15s 341us/sample - loss: 1.1192 - accuracy: 0.6070 - val_loss: 1.3423 - val_accuracy: 0.5364
Epoch 27/100
45000/45000 [==============================] - 15s 342us/sample - loss: 1.1028 - accuracy: 0.6093 - val_loss: 1.3511 - val_accuracy: 0.5358
Epoch 28/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.0907 - accuracy: 0.6158 - val_loss: 1.3706 - val_accuracy: 0.5350
Epoch 29/100
45000/45000 [==============================] - 16s 345us/sample - loss: 1.0785 - accuracy: 0.6197 - val_loss: 1.3356 - val_accuracy: 0.5398
Epoch 30/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.0718 - accuracy: 0.6198 - val_loss: 1.3529 - val_accuracy: 0.5446
Epoch 31/100
45000/45000 [==============================] - 15s 333us/sample - loss: 1.0629 - accuracy: 0.6259 - val_loss: 1.3590 - val_accuracy: 0.5434
Epoch 32/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.0504 - accuracy: 0.6292 - val_loss: 1.3448 - val_accuracy: 0.5388
Epoch 33/100
45000/45000 [==============================] - 15s 325us/sample - loss: 1.0420 - accuracy: 0.6318 - val_loss: 1.3790 - val_accuracy: 0.5350
Epoch 34/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.0304 - accuracy: 0.6362 - val_loss: 1.3621 - val_accuracy: 0.5430
Epoch 35/100
45000/45000 [==============================] - 16s 356us/sample - loss: 1.0280 - accuracy: 0.6362 - val_loss: 1.3673 - val_accuracy: 0.5366
Epoch 36/100
45000/45000 [==============================] - 16s 354us/sample - loss: 1.0100 - accuracy: 0.6439 - val_loss: 1.3659 - val_accuracy: 0.5420
Epoch 37/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.0060 - accuracy: 0.6473 - val_loss: 1.3773 - val_accuracy: 0.5398
Epoch 38/100
45000/45000 [==============================] - 15s 332us/sample - loss: 0.9966 - accuracy: 0.6496 - val_loss: 1.3946 - val_accuracy: 0.5340
5000/5000 [==============================] - 1s 157us/sample - loss: 1.3054 - accuracy: 0.5506
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 39 epochs to reach the lowest validation loss, while the new model with BN took 18 epochs. That's more than twice as fast as the previous model. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.
* *Does BN produce a better model?* Yes! The final model is also much better, with 55% accuracy instead of 47%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic; see chapter 14).
* *How does BN affect training speed?* Although the model converged twice as fast, each epoch took about 16s instead of 10s, because of the extra computations required by the BN layers. So overall, although the number of epochs was reduced by 50%, the training time (wall time) was shortened by 30%, which is still pretty significant!

d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
5000/5000 [==============================] - 0s 74us/sample - loss: 1.4626 - accuracy: 0.5140
###Markdown
We get 51.4% accuracy, which is better than the original model, but not quite as good as the model using batch normalization. Moreover, it took 13 epochs to reach the best model, which is much faster than both the original model and the BN model, plus each epoch took only 10 seconds, just like the original model. So it's by far the fastest model to train (both in terms of epochs and wall time). e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 12s 263us/sample - loss: 1.8763 - accuracy: 0.3330 - val_loss: 1.7595 - val_accuracy: 0.3668
Epoch 2/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.6527 - accuracy: 0.4148 - val_loss: 1.7666 - val_accuracy: 0.3808
Epoch 3/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.5682 - accuracy: 0.4439 - val_loss: 1.6393 - val_accuracy: 0.4490
Epoch 4/100
45000/45000 [==============================] - 10s 211us/sample - loss: 1.5030 - accuracy: 0.4698 - val_loss: 1.6028 - val_accuracy: 0.4466
Epoch 5/100
45000/45000 [==============================] - 9s 209us/sample - loss: 1.4430 - accuracy: 0.4913 - val_loss: 1.5394 - val_accuracy: 0.4562
Epoch 6/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.4005 - accuracy: 0.5084 - val_loss: 1.5408 - val_accuracy: 0.4818
Epoch 7/100
45000/45000 [==============================] - 10s 216us/sample - loss: 1.3541 - accuracy: 0.5298 - val_loss: 1.5236 - val_accuracy: 0.4866
Epoch 8/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.3189 - accuracy: 0.5405 - val_loss: 1.5174 - val_accuracy: 0.4926
Epoch 9/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.2800 - accuracy: 0.5570 - val_loss: 1.5722 - val_accuracy: 0.4998
Epoch 10/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.2512 - accuracy: 0.5656 - val_loss: 1.4974 - val_accuracy: 0.5082
Epoch 11/100
45000/45000 [==============================] - 9s 203us/sample - loss: 1.2141 - accuracy: 0.5802 - val_loss: 1.6123 - val_accuracy: 0.4916
Epoch 12/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.1856 - accuracy: 0.5893 - val_loss: 1.5449 - val_accuracy: 0.5016
Epoch 13/100
45000/45000 [==============================] - 9s 204us/sample - loss: 1.1602 - accuracy: 0.5978 - val_loss: 1.6241 - val_accuracy: 0.5056
Epoch 14/100
45000/45000 [==============================] - 9s 199us/sample - loss: 1.1290 - accuracy: 0.6118 - val_loss: 1.6085 - val_accuracy: 0.4936
Epoch 15/100
45000/45000 [==============================] - 9s 198us/sample - loss: 1.1050 - accuracy: 0.6176 - val_loss: 1.6951 - val_accuracy: 0.4860
Epoch 16/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.0786 - accuracy: 0.6293 - val_loss: 1.5806 - val_accuracy: 0.5044
Epoch 17/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.0629 - accuracy: 0.6362 - val_loss: 1.5932 - val_accuracy: 0.4970
Epoch 18/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.0330 - accuracy: 0.6458 - val_loss: 1.5968 - val_accuracy: 0.5080
Epoch 19/100
45000/45000 [==============================] - 9s 195us/sample - loss: 1.0104 - accuracy: 0.6488 - val_loss: 1.6166 - val_accuracy: 0.5152
Epoch 20/100
45000/45000 [==============================] - 9s 206us/sample - loss: 0.9896 - accuracy: 0.6629 - val_loss: 1.6174 - val_accuracy: 0.5154
Epoch 21/100
45000/45000 [==============================] - 9s 211us/sample - loss: 0.9741 - accuracy: 0.6650 - val_loss: 1.7201 - val_accuracy: 0.5040
Epoch 22/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9475 - accuracy: 0.6769 - val_loss: 1.7498 - val_accuracy: 0.5176
Epoch 23/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.9346 - accuracy: 0.6780 - val_loss: 1.7491 - val_accuracy: 0.5020
Epoch 24/100
45000/45000 [==============================] - 10s 223us/sample - loss: 1.1878 - accuracy: 0.6792 - val_loss: 1.6664 - val_accuracy: 0.4906
Epoch 25/100
45000/45000 [==============================] - 10s 219us/sample - loss: 0.9851 - accuracy: 0.6646 - val_loss: 1.7358 - val_accuracy: 0.5086
Epoch 26/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9053 - accuracy: 0.6911 - val_loss: 1.8361 - val_accuracy: 0.5094
Epoch 27/100
45000/45000 [==============================] - 10s 215us/sample - loss: 0.8681 - accuracy: 0.7048 - val_loss: 1.8487 - val_accuracy: 0.5036
Epoch 28/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.8460 - accuracy: 0.7132 - val_loss: 1.8516 - val_accuracy: 0.5068
Epoch 29/100
45000/45000 [==============================] - 10s 223us/sample - loss: 0.8258 - accuracy: 0.7208 - val_loss: 1.9383 - val_accuracy: 0.5094
Epoch 30/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.8106 - accuracy: 0.7248 - val_loss: 2.0527 - val_accuracy: 0.4974
5000/5000 [==============================] - 0s 71us/sample - loss: 1.4974 - accuracy: 0.5082
###Markdown
The model reaches 50.8% accuracy on the validation set. That's very slightly worse than without dropout (51.4%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get virtually no accuracy improvement in this case (from 50.8% to 50.9%). So the best model we got in this exercise is the Batch Normalization model.

f. *Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(len(X_train_scaled) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/15
45000/45000 [==============================] - 3s 69us/sample - loss: 2.0504 - accuracy: 0.2823 - val_loss: 1.7711 - val_accuracy: 0.3706
Epoch 2/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.7626 - accuracy: 0.3766 - val_loss: 1.7751 - val_accuracy: 0.3844
Epoch 3/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.6264 - accuracy: 0.4272 - val_loss: 1.6774 - val_accuracy: 0.4216
Epoch 4/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.5527 - accuracy: 0.4474 - val_loss: 1.6633 - val_accuracy: 0.4316
Epoch 5/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.4997 - accuracy: 0.4701 - val_loss: 1.5909 - val_accuracy: 0.4540
Epoch 6/15
45000/45000 [==============================] - 3s 60us/sample - loss: 1.4564 - accuracy: 0.4841 - val_loss: 1.5982 - val_accuracy: 0.4624
Epoch 7/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.4232 - accuracy: 0.4958 - val_loss: 1.6417 - val_accuracy: 0.4382
Epoch 8/15
45000/45000 [==============================] - 3s 58us/sample - loss: 1.3530 - accuracy: 0.5199 - val_loss: 1.5050 - val_accuracy: 0.4778
Epoch 9/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.2771 - accuracy: 0.5480 - val_loss: 1.5254 - val_accuracy: 0.4928
Epoch 10/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.2073 - accuracy: 0.5726 - val_loss: 1.5013 - val_accuracy: 0.5052
Epoch 11/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.1380 - accuracy: 0.5948 - val_loss: 1.4941 - val_accuracy: 0.5170
Epoch 12/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.0672 - accuracy: 0.6204 - val_loss: 1.5091 - val_accuracy: 0.5106
Epoch 13/15
45000/45000 [==============================] - 3s 56us/sample - loss: 0.9967 - accuracy: 0.6466 - val_loss: 1.5261 - val_accuracy: 0.5212
Epoch 14/15
45000/45000 [==============================] - 3s 58us/sample - loss: 0.9301 - accuracy: 0.6712 - val_loss: 1.5437 - val_accuracy: 0.5264
Epoch 15/15
45000/45000 [==============================] - 3s 59us/sample - loss: 0.8893 - accuracy: 0.6866 - val_loss: 1.5650 - val_accuracy: 0.5276
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
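For context (standard facts about the logistic function, added for reference): the sigmoid plotted below is $\sigma(z) = \frac{1}{1 + e^{-z}}$, and its derivative $\sigma'(z) = \sigma(z)\bigl(1 - \sigma(z)\bigr)$ is at most $\tfrac{1}{4}$ (at $z = 0$) and is essentially $0$ in the saturating regions where $|z|$ is large. Backpropagation multiplies these small factors layer after layer, which is one reason gradients tend to vanish in deep sigmoid networks.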
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
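As a quick reference (these are the standard definitions behind the initializers used below): Glorot (Xavier) initialization draws weights with variance $\sigma^2 = \frac{1}{\mathrm{fan}_{\mathrm{avg}}}$, where $\mathrm{fan}_{\mathrm{avg}} = (\mathrm{fan}_{\mathrm{in}} + \mathrm{fan}_{\mathrm{out}})/2$, while He initialization uses $\sigma^2 = \frac{2}{\mathrm{fan}_{\mathrm{in}}}$. The `he_normal` initializer draws from a truncated normal distribution with that variance, and the `VarianceScaling` call below builds a He-style initializer that uses a uniform distribution based on $\mathrm{fan}_{\mathrm{avg}}$ instead of $\mathrm{fan}_{\mathrm{in}}$.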
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6314 - accuracy: 0.5054 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.8416 - accuracy: 0.7247 - val_loss: 0.7130 - val_accuracy: 0.7656
Epoch 3/10
1719/1719 [==============================] - 2s 879us/step - loss: 0.7053 - accuracy: 0.7637 - val_loss: 0.6427 - val_accuracy: 0.7898
Epoch 4/10
1719/1719 [==============================] - 2s 883us/step - loss: 0.6325 - accuracy: 0.7908 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 2s 887us/step - loss: 0.5992 - accuracy: 0.8021 - val_loss: 0.5582 - val_accuracy: 0.8200
Epoch 6/10
1719/1719 [==============================] - 2s 881us/step - loss: 0.5624 - accuracy: 0.8142 - val_loss: 0.5350 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.5379 - accuracy: 0.8217 - val_loss: 0.5157 - val_accuracy: 0.8304
Epoch 8/10
1719/1719 [==============================] - 2s 895us/step - loss: 0.5152 - accuracy: 0.8295 - val_loss: 0.5078 - val_accuracy: 0.8284
Epoch 9/10
1719/1719 [==============================] - 2s 911us/step - loss: 0.5100 - accuracy: 0.8268 - val_loss: 0.4895 - val_accuracy: 0.8390
Epoch 10/10
1719/1719 [==============================] - 2s 897us/step - loss: 0.4918 - accuracy: 0.8340 - val_loss: 0.4817 - val_accuracy: 0.8396
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6969 - accuracy: 0.4974 - val_loss: 0.9255 - val_accuracy: 0.7186
Epoch 2/10
1719/1719 [==============================] - 2s 990us/step - loss: 0.8706 - accuracy: 0.7247 - val_loss: 0.7305 - val_accuracy: 0.7630
Epoch 3/10
1719/1719 [==============================] - 2s 980us/step - loss: 0.7211 - accuracy: 0.7621 - val_loss: 0.6564 - val_accuracy: 0.7882
Epoch 4/10
1719/1719 [==============================] - 2s 985us/step - loss: 0.6447 - accuracy: 0.7879 - val_loss: 0.6003 - val_accuracy: 0.8048
Epoch 5/10
1719/1719 [==============================] - 2s 967us/step - loss: 0.6077 - accuracy: 0.8004 - val_loss: 0.5656 - val_accuracy: 0.8182
Epoch 6/10
1719/1719 [==============================] - 2s 984us/step - loss: 0.5692 - accuracy: 0.8118 - val_loss: 0.5406 - val_accuracy: 0.8236
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5428 - accuracy: 0.8194 - val_loss: 0.5196 - val_accuracy: 0.8314
Epoch 8/10
1719/1719 [==============================] - 2s 983us/step - loss: 0.5193 - accuracy: 0.8284 - val_loss: 0.5113 - val_accuracy: 0.8316
Epoch 9/10
1719/1719 [==============================] - 2s 992us/step - loss: 0.5128 - accuracy: 0.8272 - val_loss: 0.4916 - val_accuracy: 0.8378
Epoch 10/10
1719/1719 [==============================] - 2s 988us/step - loss: 0.4941 - accuracy: 0.8314 - val_loss: 0.4826 - val_accuracy: 0.8398
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
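For reference, the SELU activation itself is just a scaled ELU (this closed form is standard; the exact constants are computed in the next cell as `alpha_0_1` and `scale_0_1`):

$$\operatorname{SELU}(z) = \lambda \begin{cases} z & \text{if } z > 0 \\ \alpha\,(e^{z} - 1) & \text{if } z \le 0 \end{cases} \qquad \text{with } \lambda \approx 1.0507,\ \alpha \approx 1.6733$$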
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 12s 6ms/step - loss: 1.3556 - accuracy: 0.4808 - val_loss: 0.7711 - val_accuracy: 0.6858
Epoch 2/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7537 - accuracy: 0.7235 - val_loss: 0.7534 - val_accuracy: 0.7384
Epoch 3/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7451 - accuracy: 0.7357 - val_loss: 0.5943 - val_accuracy: 0.7834
Epoch 4/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5699 - accuracy: 0.7906 - val_loss: 0.5434 - val_accuracy: 0.8066
Epoch 5/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5569 - accuracy: 0.8051 - val_loss: 0.4907 - val_accuracy: 0.8218
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 11s 5ms/step - loss: 2.0460 - accuracy: 0.1919 - val_loss: 1.5971 - val_accuracy: 0.3048
Epoch 2/5
1719/1719 [==============================] - 8s 5ms/step - loss: 1.2654 - accuracy: 0.4591 - val_loss: 0.9156 - val_accuracy: 0.6372
Epoch 3/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.9312 - accuracy: 0.6169 - val_loss: 0.8928 - val_accuracy: 0.6246
Epoch 4/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.8188 - accuracy: 0.6710 - val_loss: 0.6914 - val_accuracy: 0.7396
Epoch 5/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.7288 - accuracy: 0.7152 - val_loss: 0.6638 - val_accuracy: 0.7380
###Markdown
Not great at all, we suffered from the vanishing/exploding gradients problem. Batch Normalization
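As a reminder of what each `BatchNormalization` layer computes (these are the standard equations, written out here for reference): during training it standardizes each input over the current mini-batch and then rescales and shifts the result with two parameter vectors learned per layer, $\boldsymbol{\gamma}$ (scale) and $\boldsymbol{\beta}$ (offset):

$$\hat{\mathbf{x}}^{(i)} = \frac{\mathbf{x}^{(i)} - \boldsymbol{\mu}_B}{\sqrt{\boldsymbol{\sigma}_B^2 + \varepsilon}}, \qquad \mathbf{z}^{(i)} = \boldsymbol{\gamma} \otimes \hat{\mathbf{x}}^{(i)} + \boldsymbol{\beta}$$

At test time it uses moving averages of $\boldsymbol{\mu}_B$ and $\boldsymbol{\sigma}_B^2$ estimated during training, which is why each BN layer also carries two non-trainable variables (you can see them listed in the next code cell).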
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
#bn1.updates #deprecated
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.2287 - accuracy: 0.5993 - val_loss: 0.5526 - val_accuracy: 0.8230
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5996 - accuracy: 0.7959 - val_loss: 0.4725 - val_accuracy: 0.8468
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5312 - accuracy: 0.8168 - val_loss: 0.4375 - val_accuracy: 0.8558
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4884 - accuracy: 0.8294 - val_loss: 0.4153 - val_accuracy: 0.8596
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4717 - accuracy: 0.8343 - val_loss: 0.3997 - val_accuracy: 0.8640
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4420 - accuracy: 0.8461 - val_loss: 0.3867 - val_accuracy: 0.8694
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4285 - accuracy: 0.8496 - val_loss: 0.3763 - val_accuracy: 0.8710
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4086 - accuracy: 0.8552 - val_loss: 0.3711 - val_accuracy: 0.8740
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4079 - accuracy: 0.8566 - val_loss: 0.3631 - val_accuracy: 0.8752
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3903 - accuracy: 0.8617 - val_loss: 0.3573 - val_accuracy: 0.8750
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has some as well; they would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.3677 - accuracy: 0.5604 - val_loss: 0.6767 - val_accuracy: 0.7812
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.7136 - accuracy: 0.7702 - val_loss: 0.5566 - val_accuracy: 0.8184
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.6123 - accuracy: 0.7990 - val_loss: 0.5007 - val_accuracy: 0.8360
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5547 - accuracy: 0.8148 - val_loss: 0.4666 - val_accuracy: 0.8448
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5255 - accuracy: 0.8230 - val_loss: 0.4434 - val_accuracy: 0.8534
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4947 - accuracy: 0.8328 - val_loss: 0.4263 - val_accuracy: 0.8550
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4736 - accuracy: 0.8385 - val_loss: 0.4130 - val_accuracy: 0.8566
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4550 - accuracy: 0.8446 - val_loss: 0.4035 - val_accuracy: 0.8612
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4495 - accuracy: 0.8440 - val_loss: 0.3943 - val_accuracy: 0.8638
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4333 - accuracy: 0.8494 - val_loss: 0.3875 - val_accuracy: 0.8660
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments: `clipvalue` clips each gradient component independently into the given range (which can change the direction of the gradient vector), while `clipnorm` rescales the whole gradient tensor only when its ℓ2 norm exceeds the threshold (which preserves its direction).
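A quick sketch of the difference using TensorFlow's low-level clipping ops (the gradient values are made up for illustration):

```python
import tensorflow as tf

g = tf.constant([0.9, 100.0])    # a made-up gradient vector
tf.clip_by_value(g, -1.0, 1.0)   # -> [0.9, 1.0]: each component clipped, direction changes
tf.clip_by_norm(g, 1.0)          # -> roughly [0.009, 1.0]: rescaled to norm 1, direction preserved
```

Setting either argument when creating an optimizer is a one-liner: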
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:
* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).
* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.

The validation set and the test set are also split this way, but without restricting the number of images. We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 1s 83ms/step - loss: 0.6155 - accuracy: 0.6184 - val_loss: 0.5843 - val_accuracy: 0.6329
Epoch 2/4
7/7 [==============================] - 0s 9ms/step - loss: 0.5550 - accuracy: 0.6638 - val_loss: 0.5467 - val_accuracy: 0.6805
Epoch 3/4
7/7 [==============================] - 0s 8ms/step - loss: 0.4897 - accuracy: 0.7482 - val_loss: 0.5146 - val_accuracy: 0.7089
Epoch 4/4
7/7 [==============================] - 0s 8ms/step - loss: 0.4899 - accuracy: 0.7405 - val_loss: 0.4859 - val_accuracy: 0.7323
Epoch 1/16
7/7 [==============================] - 0s 28ms/step - loss: 0.4380 - accuracy: 0.7774 - val_loss: 0.3460 - val_accuracy: 0.8661
Epoch 2/16
7/7 [==============================] - 0s 9ms/step - loss: 0.2971 - accuracy: 0.9143 - val_loss: 0.2603 - val_accuracy: 0.9310
Epoch 3/16
7/7 [==============================] - 0s 9ms/step - loss: 0.2034 - accuracy: 0.9777 - val_loss: 0.2110 - val_accuracy: 0.9554
Epoch 4/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1754 - accuracy: 0.9719 - val_loss: 0.1790 - val_accuracy: 0.9696
Epoch 5/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1348 - accuracy: 0.9809 - val_loss: 0.1561 - val_accuracy: 0.9757
Epoch 6/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1172 - accuracy: 0.9973 - val_loss: 0.1392 - val_accuracy: 0.9797
Epoch 7/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1137 - accuracy: 0.9931 - val_loss: 0.1266 - val_accuracy: 0.9838
Epoch 8/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1000 - accuracy: 0.9931 - val_loss: 0.1163 - val_accuracy: 0.9858
Epoch 9/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0834 - accuracy: 1.0000 - val_loss: 0.1065 - val_accuracy: 0.9888
Epoch 10/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0775 - accuracy: 1.0000 - val_loss: 0.0999 - val_accuracy: 0.9899
Epoch 11/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0689 - accuracy: 1.0000 - val_loss: 0.0939 - val_accuracy: 0.9899
Epoch 12/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0888 - val_accuracy: 0.9899
Epoch 13/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0565 - accuracy: 1.0000 - val_loss: 0.0839 - val_accuracy: 0.9899
Epoch 14/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0494 - accuracy: 1.0000 - val_loss: 0.0802 - val_accuracy: 0.9899
Epoch 15/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0544 - accuracy: 1.0000 - val_loss: 0.0768 - val_accuracy: 0.9899
Epoch 16/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0472 - accuracy: 1.0000 - val_loss: 0.0738 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 705us/step - loss: 0.0682 - accuracy: 0.9935
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4.5!
###Code
(100 - 97.05) / (100 - 99.35)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
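For reference, with `momentum=`$\beta$ the optimizer below maintains a velocity vector $\mathbf{m}$ and applies the standard momentum update at each step:

$$\mathbf{m} \leftarrow \beta\,\mathbf{m} - \eta\,\nabla_{\boldsymbol{\theta}} J(\boldsymbol{\theta}), \qquad \boldsymbol{\theta} \leftarrow \boldsymbol{\theta} + \mathbf{m}$$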
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
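As a reminder, the standard Adam update combines a momentum-like and an RMSProp-like accumulator (written out here for reference; $\mathbf{g} = \nabla_{\boldsymbol{\theta}} J(\boldsymbol{\theta})$, $t$ is the step number starting at 1, and $\varepsilon$ is a small smoothing term):

$$\mathbf{m} \leftarrow \beta_1\,\mathbf{m} + (1 - \beta_1)\,\mathbf{g}, \qquad \mathbf{s} \leftarrow \beta_2\,\mathbf{s} + (1 - \beta_2)\,\mathbf{g} \otimes \mathbf{g}$$

$$\hat{\mathbf{m}} = \frac{\mathbf{m}}{1 - \beta_1^{\,t}}, \qquad \hat{\mathbf{s}} = \frac{\mathbf{s}}{1 - \beta_2^{\,t}}, \qquad \boldsymbol{\theta} \leftarrow \boldsymbol{\theta} - \eta\,\frac{\hat{\mathbf{m}}}{\sqrt{\hat{\mathbf{s}}} + \varepsilon}$$

The `beta_1` and `beta_2` arguments below are $\beta_1$ and $\beta_2$.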
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```
* Keras uses `c=1` and `s = 1 / decay`
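To make the shape of this schedule concrete (simple arithmetic using the values from the next cell, where `lr0 = 0.01` and `decay = 1e-4`, i.e. $s = 10{,}000$ steps):

$$\eta(t) = \frac{\eta_0}{1 + t/s} \quad\Rightarrow\quad \eta(10{,}000) = \frac{0.01}{2} = 0.005, \qquad \eta(20{,}000) = \frac{0.01}{3} \approx 0.0033$$

so the learning rate is divided by 2 after $s$ steps, by 3 after $2s$ steps, and so on.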
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
import math
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = math.ceil(len(X_train) / batch_size)
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))  # use the s passed to the constructor
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5995 - accuracy: 0.7923 - val_loss: 0.4095 - val_accuracy: 0.8606
Epoch 2/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3890 - accuracy: 0.8613 - val_loss: 0.3738 - val_accuracy: 0.8692
Epoch 3/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3530 - accuracy: 0.8772 - val_loss: 0.3735 - val_accuracy: 0.8692
Epoch 4/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3296 - accuracy: 0.8813 - val_loss: 0.3494 - val_accuracy: 0.8798
Epoch 5/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3178 - accuracy: 0.8867 - val_loss: 0.3430 - val_accuracy: 0.8794
Epoch 6/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2930 - accuracy: 0.8951 - val_loss: 0.3414 - val_accuracy: 0.8826
Epoch 7/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2854 - accuracy: 0.8985 - val_loss: 0.3354 - val_accuracy: 0.8810
Epoch 8/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9039 - val_loss: 0.3364 - val_accuracy: 0.8824
Epoch 9/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9047 - val_loss: 0.3265 - val_accuracy: 0.8846
Epoch 10/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2570 - accuracy: 0.9084 - val_loss: 0.3238 - val_accuracy: 0.8854
Epoch 11/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2502 - accuracy: 0.9117 - val_loss: 0.3250 - val_accuracy: 0.8862
Epoch 12/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2453 - accuracy: 0.9145 - val_loss: 0.3299 - val_accuracy: 0.8830
Epoch 13/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2408 - accuracy: 0.9154 - val_loss: 0.3219 - val_accuracy: 0.8870
Epoch 14/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2380 - accuracy: 0.9154 - val_loss: 0.3221 - val_accuracy: 0.8860
Epoch 15/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2378 - accuracy: 0.9166 - val_loss: 0.3208 - val_accuracy: 0.8864
Epoch 16/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2318 - accuracy: 0.9191 - val_loss: 0.3184 - val_accuracy: 0.8892
Epoch 17/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2266 - accuracy: 0.9212 - val_loss: 0.3197 - val_accuracy: 0.8906
Epoch 18/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2284 - accuracy: 0.9185 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 19/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2286 - accuracy: 0.9205 - val_loss: 0.3197 - val_accuracy: 0.8884
Epoch 20/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2288 - accuracy: 0.9211 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 21/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2265 - accuracy: 0.9212 - val_loss: 0.3179 - val_accuracy: 0.8904
Epoch 22/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2258 - accuracy: 0.9205 - val_loss: 0.3163 - val_accuracy: 0.8914
Epoch 23/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9226 - val_loss: 0.3170 - val_accuracy: 0.8904
Epoch 24/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2182 - accuracy: 0.9244 - val_loss: 0.3165 - val_accuracy: 0.8898
Epoch 25/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9229 - val_loss: 0.3164 - val_accuracy: 0.8904
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = math.ceil(len(X) / batch_size) * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
###Output
_____no_output_____
###Markdown
**Warning**: In the `on_batch_end()` method, `logs["loss"]` used to contain the batch loss, but in TensorFlow 2.2.0 it was replaced with the mean loss (since the start of the epoch). This explains why the graph below is much smoother than in the book (if you are using TF 2.2 or above). It also means that there is a lag between the moment the batch loss starts exploding and the moment the explosion becomes clear in the graph. So you should choose a slightly smaller learning rate than you would have chosen with the "noisy" graph. Alternatively, you can tweak the `ExponentialLearningRate` callback above so it computes the batch loss (based on the current mean loss and the previous mean loss):

```python
class ExponentialLearningRate(keras.callbacks.Callback):
    def __init__(self, factor):
        self.factor = factor
        self.rates = []
        self.losses = []
    def on_epoch_begin(self, epoch, logs=None):
        self.prev_loss = 0
    def on_batch_end(self, batch, logs=None):
        batch_loss = logs["loss"] * (batch + 1) - self.prev_loss * batch
        self.prev_loss = logs["loss"]
        self.rates.append(K.get_value(self.model.optimizer.lr))
        self.losses.append(batch_loss)
        K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
```
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(math.ceil(len(X_train) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 1s 2ms/step - loss: 0.6572 - accuracy: 0.7740 - val_loss: 0.4872 - val_accuracy: 0.8338
Epoch 2/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4580 - accuracy: 0.8397 - val_loss: 0.4274 - val_accuracy: 0.8520
Epoch 3/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4121 - accuracy: 0.8545 - val_loss: 0.4116 - val_accuracy: 0.8588
Epoch 4/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3837 - accuracy: 0.8642 - val_loss: 0.3868 - val_accuracy: 0.8688
Epoch 5/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3639 - accuracy: 0.8719 - val_loss: 0.3766 - val_accuracy: 0.8688
Epoch 6/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3456 - accuracy: 0.8775 - val_loss: 0.3739 - val_accuracy: 0.8706
Epoch 7/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3330 - accuracy: 0.8811 - val_loss: 0.3635 - val_accuracy: 0.8708
Epoch 8/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3184 - accuracy: 0.8861 - val_loss: 0.3959 - val_accuracy: 0.8610
Epoch 9/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3065 - accuracy: 0.8890 - val_loss: 0.3475 - val_accuracy: 0.8770
Epoch 10/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2943 - accuracy: 0.8927 - val_loss: 0.3392 - val_accuracy: 0.8806
Epoch 11/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2838 - accuracy: 0.8963 - val_loss: 0.3467 - val_accuracy: 0.8800
Epoch 12/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2707 - accuracy: 0.9024 - val_loss: 0.3646 - val_accuracy: 0.8696
Epoch 13/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2536 - accuracy: 0.9079 - val_loss: 0.3350 - val_accuracy: 0.8842
Epoch 14/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2405 - accuracy: 0.9135 - val_loss: 0.3465 - val_accuracy: 0.8794
Epoch 15/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2279 - accuracy: 0.9185 - val_loss: 0.3257 - val_accuracy: 0.8830
Epoch 16/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2159 - accuracy: 0.9232 - val_loss: 0.3294 - val_accuracy: 0.8824
Epoch 17/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2062 - accuracy: 0.9263 - val_loss: 0.3333 - val_accuracy: 0.8882
Epoch 18/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1978 - accuracy: 0.9301 - val_loss: 0.3235 - val_accuracy: 0.8898
Epoch 19/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1892 - accuracy: 0.9337 - val_loss: 0.3233 - val_accuracy: 0.8906
Epoch 20/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1821 - accuracy: 0.9365 - val_loss: 0.3224 - val_accuracy: 0.8928
Epoch 21/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1752 - accuracy: 0.9400 - val_loss: 0.3220 - val_accuracy: 0.8908
Epoch 22/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1700 - accuracy: 0.9416 - val_loss: 0.3180 - val_accuracy: 0.8962
Epoch 23/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1655 - accuracy: 0.9438 - val_loss: 0.3187 - val_accuracy: 0.8940
Epoch 24/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1627 - accuracy: 0.9454 - val_loss: 0.3177 - val_accuracy: 0.8932
Epoch 25/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1610 - accuracy: 0.9462 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
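# For illustration only: the alternatives mentioned above, shown as hypothetical
# layers that are not used by the model below.
layer_l1 = keras.layers.Dense(100, activation="elu",
                              kernel_initializer="he_normal",
                              kernel_regularizer=keras.regularizers.l1(0.1))
layer_l1_l2 = keras.layers.Dense(100, activation="elu",
                                 kernel_initializer="he_normal",
                                 kernel_regularizer=keras.regularizers.l1_l2(0.1, 0.01))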
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 3.2911 - accuracy: 0.7924 - val_loss: 0.7218 - val_accuracy: 0.8310
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.7282 - accuracy: 0.8245 - val_loss: 0.6826 - val_accuracy: 0.8382
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 0.7611 - accuracy: 0.7576 - val_loss: 0.3730 - val_accuracy: 0.8644
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4306 - accuracy: 0.8401 - val_loss: 0.3395 - val_accuracy: 0.8722
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4225 - accuracy: 0.8432
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
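# Both subclasses simply force training=True, so dropout stays active at inference
# time, which is what MC Dropout needs in order to sample many stochastic predictions.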
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5763 - accuracy: 0.8020 - val_loss: 0.3674 - val_accuracy: 0.8674
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3545 - accuracy: 0.8709 - val_loss: 0.3714 - val_accuracy: 0.8662
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4960 - accuracy: 0.4762
###Markdown
The model with the lowest validation loss gets about 47.6% accuracy on the validation set. It took 27 epochs to reach the lowest validation loss, with roughly 8 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 19s 9ms/step - loss: 1.9765 - accuracy: 0.2968 - val_loss: 1.6602 - val_accuracy: 0.4042
Epoch 2/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6787 - accuracy: 0.4056 - val_loss: 1.5887 - val_accuracy: 0.4304
Epoch 3/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6097 - accuracy: 0.4274 - val_loss: 1.5781 - val_accuracy: 0.4326
Epoch 4/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5574 - accuracy: 0.4486 - val_loss: 1.5064 - val_accuracy: 0.4676
Epoch 5/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5075 - accuracy: 0.4642 - val_loss: 1.4412 - val_accuracy: 0.4844
Epoch 6/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4664 - accuracy: 0.4787 - val_loss: 1.4179 - val_accuracy: 0.4984
Epoch 7/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4334 - accuracy: 0.4932 - val_loss: 1.4277 - val_accuracy: 0.4906
Epoch 8/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.4054 - accuracy: 0.5038 - val_loss: 1.3843 - val_accuracy: 0.5130
Epoch 9/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3816 - accuracy: 0.5106 - val_loss: 1.3691 - val_accuracy: 0.5108
Epoch 10/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3547 - accuracy: 0.5206 - val_loss: 1.3552 - val_accuracy: 0.5226
Epoch 11/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.3244 - accuracy: 0.5371 - val_loss: 1.3678 - val_accuracy: 0.5142
Epoch 12/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3078 - accuracy: 0.5393 - val_loss: 1.3844 - val_accuracy: 0.5080
Epoch 13/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2889 - accuracy: 0.5431 - val_loss: 1.3566 - val_accuracy: 0.5164
Epoch 14/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2607 - accuracy: 0.5559 - val_loss: 1.3626 - val_accuracy: 0.5248
Epoch 15/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2580 - accuracy: 0.5587 - val_loss: 1.3616 - val_accuracy: 0.5276
Epoch 16/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2441 - accuracy: 0.5586 - val_loss: 1.3350 - val_accuracy: 0.5286
Epoch 17/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2241 - accuracy: 0.5676 - val_loss: 1.3370 - val_accuracy: 0.5408
Epoch 18/100
<<29 more lines>>
Epoch 33/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0336 - accuracy: 0.6369 - val_loss: 1.3682 - val_accuracy: 0.5450
Epoch 34/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.0228 - accuracy: 0.6388 - val_loss: 1.3348 - val_accuracy: 0.5458
Epoch 35/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0205 - accuracy: 0.6407 - val_loss: 1.3490 - val_accuracy: 0.5440
Epoch 36/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.0008 - accuracy: 0.6489 - val_loss: 1.3568 - val_accuracy: 0.5408
Epoch 37/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9785 - accuracy: 0.6543 - val_loss: 1.3628 - val_accuracy: 0.5396
Epoch 38/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9832 - accuracy: 0.6592 - val_loss: 1.3617 - val_accuracy: 0.5482
Epoch 39/100
1407/1407 [==============================] - 12s 8ms/step - loss: 0.9707 - accuracy: 0.6581 - val_loss: 1.3767 - val_accuracy: 0.5446
Epoch 40/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9590 - accuracy: 0.6651 - val_loss: 1.4200 - val_accuracy: 0.5314
Epoch 41/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9548 - accuracy: 0.6668 - val_loss: 1.3692 - val_accuracy: 0.5450
Epoch 42/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9480 - accuracy: 0.6667 - val_loss: 1.3841 - val_accuracy: 0.5310
Epoch 43/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9411 - accuracy: 0.6716 - val_loss: 1.4036 - val_accuracy: 0.5382
Epoch 44/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9383 - accuracy: 0.6708 - val_loss: 1.4114 - val_accuracy: 0.5236
Epoch 45/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9258 - accuracy: 0.6769 - val_loss: 1.4224 - val_accuracy: 0.5324
Epoch 46/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9072 - accuracy: 0.6836 - val_loss: 1.3875 - val_accuracy: 0.5442
Epoch 47/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8996 - accuracy: 0.6850 - val_loss: 1.4449 - val_accuracy: 0.5280
Epoch 48/100
1407/1407 [==============================] - 13s 9ms/step - loss: 0.9050 - accuracy: 0.6835 - val_loss: 1.4167 - val_accuracy: 0.5338
Epoch 49/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8934 - accuracy: 0.6880 - val_loss: 1.4260 - val_accuracy: 0.5294
157/157 [==============================] - 1s 2ms/step - loss: 1.3344 - accuracy: 0.5398
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 27 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 5 epochs and continued to make progress until the 16th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 54.0% accuracy instead of 47.6%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 12s instead of 8s, because of the extra computations required by the BN layers. But overall the training time (wall time) was shortened significantly! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustements to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4633 - accuracy: 0.4792
###Markdown
We get 47.9% accuracy, which is not much better than the original model (47.6%), and not as good as the model using batch normalization (54.0%). However, convergence was almost as fast as with the BN model, plus each epoch took only 7 seconds. So it's by far the fastest model to train so far. e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 9s 5ms/step - loss: 2.0583 - accuracy: 0.2742 - val_loss: 1.7429 - val_accuracy: 0.3858
Epoch 2/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.6852 - accuracy: 0.4008 - val_loss: 1.7055 - val_accuracy: 0.3792
Epoch 3/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5963 - accuracy: 0.4413 - val_loss: 1.7401 - val_accuracy: 0.4072
Epoch 4/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5231 - accuracy: 0.4634 - val_loss: 1.5728 - val_accuracy: 0.4584
Epoch 5/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.4619 - accuracy: 0.4887 - val_loss: 1.5448 - val_accuracy: 0.4702
Epoch 6/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.4074 - accuracy: 0.5061 - val_loss: 1.5678 - val_accuracy: 0.4664
Epoch 7/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.3718 - accuracy: 0.5222 - val_loss: 1.5764 - val_accuracy: 0.4824
Epoch 8/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.3220 - accuracy: 0.5387 - val_loss: 1.4805 - val_accuracy: 0.4890
Epoch 9/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2908 - accuracy: 0.5487 - val_loss: 1.5521 - val_accuracy: 0.4638
Epoch 10/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.2537 - accuracy: 0.5607 - val_loss: 1.5281 - val_accuracy: 0.4924
Epoch 11/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2215 - accuracy: 0.5782 - val_loss: 1.5147 - val_accuracy: 0.5046
Epoch 12/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.1910 - accuracy: 0.5831 - val_loss: 1.5248 - val_accuracy: 0.5002
Epoch 13/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1659 - accuracy: 0.5982 - val_loss: 1.5620 - val_accuracy: 0.5066
Epoch 14/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1282 - accuracy: 0.6120 - val_loss: 1.5440 - val_accuracy: 0.5180
Epoch 15/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1127 - accuracy: 0.6133 - val_loss: 1.5782 - val_accuracy: 0.5146
Epoch 16/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0917 - accuracy: 0.6266 - val_loss: 1.6182 - val_accuracy: 0.5182
Epoch 17/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.0620 - accuracy: 0.6331 - val_loss: 1.6285 - val_accuracy: 0.5126
Epoch 18/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0433 - accuracy: 0.6413 - val_loss: 1.6299 - val_accuracy: 0.5158
Epoch 19/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0087 - accuracy: 0.6549 - val_loss: 1.7172 - val_accuracy: 0.5062
Epoch 20/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9950 - accuracy: 0.6571 - val_loss: 1.6524 - val_accuracy: 0.5098
Epoch 21/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9848 - accuracy: 0.6652 - val_loss: 1.7686 - val_accuracy: 0.5038
Epoch 22/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9597 - accuracy: 0.6744 - val_loss: 1.6177 - val_accuracy: 0.5084
Epoch 23/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9399 - accuracy: 0.6790 - val_loss: 1.7095 - val_accuracy: 0.5082
Epoch 24/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9148 - accuracy: 0.6884 - val_loss: 1.7160 - val_accuracy: 0.5150
Epoch 25/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9023 - accuracy: 0.6949 - val_loss: 1.7017 - val_accuracy: 0.5152
Epoch 26/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8732 - accuracy: 0.7031 - val_loss: 1.7274 - val_accuracy: 0.5088
Epoch 27/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.8542 - accuracy: 0.7091 - val_loss: 1.7648 - val_accuracy: 0.5166
Epoch 28/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8499 - accuracy: 0.7118 - val_loss: 1.7973 - val_accuracy: 0.5000
157/157 [==============================] - 0s 1ms/step - loss: 1.4805 - accuracy: 0.4890
###Markdown
The model reaches 48.9% accuracy on the validation set. That's very slightly better than without dropout (47.6%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple of utility functions. The first will run the model many times (10 by default) and return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get no accuracy improvement in this case (we're still at 48.9% accuracy). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(math.ceil(len(X_train_scaled) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/15
352/352 [==============================] - 3s 6ms/step - loss: 2.2298 - accuracy: 0.2349 - val_loss: 1.7841 - val_accuracy: 0.3834
Epoch 2/15
352/352 [==============================] - 2s 6ms/step - loss: 1.7928 - accuracy: 0.3689 - val_loss: 1.6806 - val_accuracy: 0.4086
Epoch 3/15
352/352 [==============================] - 2s 6ms/step - loss: 1.6475 - accuracy: 0.4190 - val_loss: 1.6378 - val_accuracy: 0.4350
Epoch 4/15
352/352 [==============================] - 2s 6ms/step - loss: 1.5428 - accuracy: 0.4543 - val_loss: 1.6266 - val_accuracy: 0.4390
Epoch 5/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4865 - accuracy: 0.4769 - val_loss: 1.6158 - val_accuracy: 0.4384
Epoch 6/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4339 - accuracy: 0.4866 - val_loss: 1.5850 - val_accuracy: 0.4412
Epoch 7/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4042 - accuracy: 0.5056 - val_loss: 1.6146 - val_accuracy: 0.4384
Epoch 8/15
352/352 [==============================] - 2s 6ms/step - loss: 1.3437 - accuracy: 0.5229 - val_loss: 1.5299 - val_accuracy: 0.4846
Epoch 9/15
352/352 [==============================] - 2s 5ms/step - loss: 1.2721 - accuracy: 0.5459 - val_loss: 1.5145 - val_accuracy: 0.4874
Epoch 10/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1942 - accuracy: 0.5698 - val_loss: 1.4958 - val_accuracy: 0.5040
Epoch 11/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1211 - accuracy: 0.6033 - val_loss: 1.5406 - val_accuracy: 0.4984
Epoch 12/15
352/352 [==============================] - 2s 6ms/step - loss: 1.0673 - accuracy: 0.6161 - val_loss: 1.5284 - val_accuracy: 0.5144
Epoch 13/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9927 - accuracy: 0.6435 - val_loss: 1.5449 - val_accuracy: 0.5140
Epoch 14/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9205 - accuracy: 0.6703 - val_loss: 1.5652 - val_accuracy: 0.5224
Epoch 15/15
352/352 [==============================] - 2s 6ms/step - loss: 0.8936 - accuracy: 0.6801 - val_loss: 1.5912 - val_accuracy: 0.5198
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Run in Google Colab Setup First, let's import a few modules, make Matplotlib plot figures inline, and prepare a function to save the figures. We also check that the Python version is 3.5 or higher (it also works with Python 2.x, but support will end soon, so Python 3 is recommended), and that Scikit-Learn is 0.20 or higher and TensorFlow is 2.0 or higher.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# To make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("그림 저장:", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
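# A He-style initializer (scale=2.), but using a uniform distribution and fan_avg
# instead of the defaults used by "he_normal" (truncated normal, fan_in).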
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6314 - accuracy: 0.5054 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.8416 - accuracy: 0.7247 - val_loss: 0.7130 - val_accuracy: 0.7656
Epoch 3/10
1719/1719 [==============================] - 2s 879us/step - loss: 0.7053 - accuracy: 0.7637 - val_loss: 0.6427 - val_accuracy: 0.7898
Epoch 4/10
1719/1719 [==============================] - 2s 883us/step - loss: 0.6325 - accuracy: 0.7908 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 2s 887us/step - loss: 0.5992 - accuracy: 0.8021 - val_loss: 0.5582 - val_accuracy: 0.8200
Epoch 6/10
1719/1719 [==============================] - 2s 881us/step - loss: 0.5624 - accuracy: 0.8142 - val_loss: 0.5350 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.5379 - accuracy: 0.8217 - val_loss: 0.5157 - val_accuracy: 0.8304
Epoch 8/10
1719/1719 [==============================] - 2s 895us/step - loss: 0.5152 - accuracy: 0.8295 - val_loss: 0.5078 - val_accuracy: 0.8284
Epoch 9/10
1719/1719 [==============================] - 2s 911us/step - loss: 0.5100 - accuracy: 0.8268 - val_loss: 0.4895 - val_accuracy: 0.8390
Epoch 10/10
1719/1719 [==============================] - 2s 897us/step - loss: 0.4918 - accuracy: 0.8340 - val_loss: 0.4817 - val_accuracy: 0.8396
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6969 - accuracy: 0.4974 - val_loss: 0.9255 - val_accuracy: 0.7186
Epoch 2/10
1719/1719 [==============================] - 2s 990us/step - loss: 0.8706 - accuracy: 0.7247 - val_loss: 0.7305 - val_accuracy: 0.7630
Epoch 3/10
1719/1719 [==============================] - 2s 980us/step - loss: 0.7211 - accuracy: 0.7621 - val_loss: 0.6564 - val_accuracy: 0.7882
Epoch 4/10
1719/1719 [==============================] - 2s 985us/step - loss: 0.6447 - accuracy: 0.7879 - val_loss: 0.6003 - val_accuracy: 0.8048
Epoch 5/10
1719/1719 [==============================] - 2s 967us/step - loss: 0.6077 - accuracy: 0.8004 - val_loss: 0.5656 - val_accuracy: 0.8182
Epoch 6/10
1719/1719 [==============================] - 2s 984us/step - loss: 0.5692 - accuracy: 0.8118 - val_loss: 0.5406 - val_accuracy: 0.8236
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5428 - accuracy: 0.8194 - val_loss: 0.5196 - val_accuracy: 0.8314
Epoch 8/10
1719/1719 [==============================] - 2s 983us/step - loss: 0.5193 - accuracy: 0.8284 - val_loss: 0.5113 - val_accuracy: 0.8316
Epoch 9/10
1719/1719 [==============================] - 2s 992us/step - loss: 0.5128 - accuracy: 0.8272 - val_loss: 0.4916 - val_accuracy: 0.8378
Epoch 10/10
1719/1719 [==============================] - 2s 988us/step - loss: 0.4941 - accuracy: 0.8314 - val_loss: 0.4826 - val_accuracy: 0.8398
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 12s 6ms/step - loss: 1.3556 - accuracy: 0.4808 - val_loss: 0.7711 - val_accuracy: 0.6858
Epoch 2/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7537 - accuracy: 0.7235 - val_loss: 0.7534 - val_accuracy: 0.7384
Epoch 3/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7451 - accuracy: 0.7357 - val_loss: 0.5943 - val_accuracy: 0.7834
Epoch 4/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5699 - accuracy: 0.7906 - val_loss: 0.5434 - val_accuracy: 0.8066
Epoch 5/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5569 - accuracy: 0.8051 - val_loss: 0.4907 - val_accuracy: 0.8218
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 11s 5ms/step - loss: 2.0460 - accuracy: 0.1919 - val_loss: 1.5971 - val_accuracy: 0.3048
Epoch 2/5
1719/1719 [==============================] - 8s 5ms/step - loss: 1.2654 - accuracy: 0.4591 - val_loss: 0.9156 - val_accuracy: 0.6372
Epoch 3/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.9312 - accuracy: 0.6169 - val_loss: 0.8928 - val_accuracy: 0.6246
Epoch 4/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.8188 - accuracy: 0.6710 - val_loss: 0.6914 - val_accuracy: 0.7396
Epoch 5/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.7288 - accuracy: 0.7152 - val_loss: 0.6638 - val_accuracy: 0.7380
###Markdown
Not great at all, we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
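# gamma and beta are trained by backpropagation; moving_mean and moving_variance
# are updated with exponential moving averages during training, so they are not trainable.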
#bn1.updates #deprecated
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.2287 - accuracy: 0.5993 - val_loss: 0.5526 - val_accuracy: 0.8230
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5996 - accuracy: 0.7959 - val_loss: 0.4725 - val_accuracy: 0.8468
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5312 - accuracy: 0.8168 - val_loss: 0.4375 - val_accuracy: 0.8558
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4884 - accuracy: 0.8294 - val_loss: 0.4153 - val_accuracy: 0.8596
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4717 - accuracy: 0.8343 - val_loss: 0.3997 - val_accuracy: 0.8640
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4420 - accuracy: 0.8461 - val_loss: 0.3867 - val_accuracy: 0.8694
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4285 - accuracy: 0.8496 - val_loss: 0.3763 - val_accuracy: 0.8710
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4086 - accuracy: 0.8552 - val_loss: 0.3711 - val_accuracy: 0.8740
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4079 - accuracy: 0.8566 - val_loss: 0.3631 - val_accuracy: 0.8752
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3903 - accuracy: 0.8617 - val_loss: 0.3573 - val_accuracy: 0.8750
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer includes one as well; it would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.3677 - accuracy: 0.5604 - val_loss: 0.6767 - val_accuracy: 0.7812
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.7136 - accuracy: 0.7702 - val_loss: 0.5566 - val_accuracy: 0.8184
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.6123 - accuracy: 0.7990 - val_loss: 0.5007 - val_accuracy: 0.8360
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5547 - accuracy: 0.8148 - val_loss: 0.4666 - val_accuracy: 0.8448
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5255 - accuracy: 0.8230 - val_loss: 0.4434 - val_accuracy: 0.8534
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4947 - accuracy: 0.8328 - val_loss: 0.4263 - val_accuracy: 0.8550
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4736 - accuracy: 0.8385 - val_loss: 0.4130 - val_accuracy: 0.8566
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4550 - accuracy: 0.8446 - val_loss: 0.4035 - val_accuracy: 0.8612
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4495 - accuracy: 0.8440 - val_loss: 0.3943 - val_accuracy: 0.8638
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4333 - accuracy: 0.8494 - val_loss: 0.3875 - val_accuracy: 0.8660
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
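# Minimal usage sketch: a clipped optimizer is passed to compile() like any other;
# the small model below is hypothetical and only here for illustration.
clip_demo_model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(10, activation="softmax")
])
clip_demo_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
                        metrics=["accuracy"])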
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts. The validation set and the test set are also split this way, but without restricting the number of images. We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
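# Note: model_B_on_A shares its layers with model_A, so training it also modifies
# model_A. The clone above keeps an independent copy of model_A's architecture and
# weights (to train on the copy instead, you would rebuild model_B_on_A from
# model_A_clone.layers[:-1]).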
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
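# You must compile the model again after freezing or unfreezing layers for the
# change to take effect.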
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 1s 83ms/step - loss: 0.6155 - accuracy: 0.6184 - val_loss: 0.5843 - val_accuracy: 0.6329
Epoch 2/4
7/7 [==============================] - 0s 9ms/step - loss: 0.5550 - accuracy: 0.6638 - val_loss: 0.5467 - val_accuracy: 0.6805
Epoch 3/4
7/7 [==============================] - 0s 8ms/step - loss: 0.4897 - accuracy: 0.7482 - val_loss: 0.5146 - val_accuracy: 0.7089
Epoch 4/4
7/7 [==============================] - 0s 8ms/step - loss: 0.4899 - accuracy: 0.7405 - val_loss: 0.4859 - val_accuracy: 0.7323
Epoch 1/16
7/7 [==============================] - 0s 28ms/step - loss: 0.4380 - accuracy: 0.7774 - val_loss: 0.3460 - val_accuracy: 0.8661
Epoch 2/16
7/7 [==============================] - 0s 9ms/step - loss: 0.2971 - accuracy: 0.9143 - val_loss: 0.2603 - val_accuracy: 0.9310
Epoch 3/16
7/7 [==============================] - 0s 9ms/step - loss: 0.2034 - accuracy: 0.9777 - val_loss: 0.2110 - val_accuracy: 0.9554
Epoch 4/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1754 - accuracy: 0.9719 - val_loss: 0.1790 - val_accuracy: 0.9696
Epoch 5/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1348 - accuracy: 0.9809 - val_loss: 0.1561 - val_accuracy: 0.9757
Epoch 6/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1172 - accuracy: 0.9973 - val_loss: 0.1392 - val_accuracy: 0.9797
Epoch 7/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1137 - accuracy: 0.9931 - val_loss: 0.1266 - val_accuracy: 0.9838
Epoch 8/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1000 - accuracy: 0.9931 - val_loss: 0.1163 - val_accuracy: 0.9858
Epoch 9/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0834 - accuracy: 1.0000 - val_loss: 0.1065 - val_accuracy: 0.9888
Epoch 10/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0775 - accuracy: 1.0000 - val_loss: 0.0999 - val_accuracy: 0.9899
Epoch 11/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0689 - accuracy: 1.0000 - val_loss: 0.0939 - val_accuracy: 0.9899
Epoch 12/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0888 - val_accuracy: 0.9899
Epoch 13/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0565 - accuracy: 1.0000 - val_loss: 0.0839 - val_accuracy: 0.9899
Epoch 14/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0494 - accuracy: 1.0000 - val_loss: 0.0802 - val_accuracy: 0.9899
Epoch 15/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0544 - accuracy: 1.0000 - val_loss: 0.0768 - val_accuracy: 0.9899
Epoch 16/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0472 - accuracy: 1.0000 - val_loss: 0.0738 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 705us/step - loss: 0.0682 - accuracy: 0.9935
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4.5!
###Code
(100 - 97.05) / (100 - 99.35)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
import math
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = math.ceil(len(X_train) / batch_size)
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
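# Sanity check (added illustration): after one epoch (~1719 steps with batch size 32),
# lr ≈ 0.01 / (1 + 1e-4 * 1719) ≈ 0.0085, so the rate has already dropped by roughly 15%.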
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
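# Sanity check (added illustration): with lr0=0.01 and s=20, the rate is divided by 10
# every 20 epochs, e.g. exponential_decay_fn(0) == 0.01 and exponential_decay_fn(20) == 0.001.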
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
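# Note (added comment): multiplying the current rate by 0.1**(1 / 20) at every epoch also
# divides it by 10 every 20 epochs, but it starts from the optimizer's initial learning
# rate rather than from a hard-coded lr0.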
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
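# Descriptive note (added comment): ReduceLROnPlateau multiplies the learning rate by 0.5
# whenever the monitored metric (val_loss by default) has not improved for 5 consecutive epochs.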
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
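# Descriptive note (added comment): this schedule multiplies the learning rate by 0.1
# every `s` optimizer steps (about 20 epochs here); it is applied at every step and is
# smooth rather than staircase-shaped, since `staircase` defaults to False.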
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5995 - accuracy: 0.7923 - val_loss: 0.4095 - val_accuracy: 0.8606
Epoch 2/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3890 - accuracy: 0.8613 - val_loss: 0.3738 - val_accuracy: 0.8692
Epoch 3/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3530 - accuracy: 0.8772 - val_loss: 0.3735 - val_accuracy: 0.8692
Epoch 4/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3296 - accuracy: 0.8813 - val_loss: 0.3494 - val_accuracy: 0.8798
Epoch 5/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3178 - accuracy: 0.8867 - val_loss: 0.3430 - val_accuracy: 0.8794
Epoch 6/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2930 - accuracy: 0.8951 - val_loss: 0.3414 - val_accuracy: 0.8826
Epoch 7/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2854 - accuracy: 0.8985 - val_loss: 0.3354 - val_accuracy: 0.8810
Epoch 8/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9039 - val_loss: 0.3364 - val_accuracy: 0.8824
Epoch 9/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9047 - val_loss: 0.3265 - val_accuracy: 0.8846
Epoch 10/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2570 - accuracy: 0.9084 - val_loss: 0.3238 - val_accuracy: 0.8854
Epoch 11/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2502 - accuracy: 0.9117 - val_loss: 0.3250 - val_accuracy: 0.8862
Epoch 12/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2453 - accuracy: 0.9145 - val_loss: 0.3299 - val_accuracy: 0.8830
Epoch 13/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2408 - accuracy: 0.9154 - val_loss: 0.3219 - val_accuracy: 0.8870
Epoch 14/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2380 - accuracy: 0.9154 - val_loss: 0.3221 - val_accuracy: 0.8860
Epoch 15/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2378 - accuracy: 0.9166 - val_loss: 0.3208 - val_accuracy: 0.8864
Epoch 16/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2318 - accuracy: 0.9191 - val_loss: 0.3184 - val_accuracy: 0.8892
Epoch 17/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2266 - accuracy: 0.9212 - val_loss: 0.3197 - val_accuracy: 0.8906
Epoch 18/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2284 - accuracy: 0.9185 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 19/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2286 - accuracy: 0.9205 - val_loss: 0.3197 - val_accuracy: 0.8884
Epoch 20/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2288 - accuracy: 0.9211 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 21/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2265 - accuracy: 0.9212 - val_loss: 0.3179 - val_accuracy: 0.8904
Epoch 22/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2258 - accuracy: 0.9205 - val_loss: 0.3163 - val_accuracy: 0.8914
Epoch 23/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9226 - val_loss: 0.3170 - val_accuracy: 0.8904
Epoch 24/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2182 - accuracy: 0.9244 - val_loss: 0.3165 - val_accuracy: 0.8898
Epoch 25/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9229 - val_loss: 0.3164 - val_accuracy: 0.8904
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
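# Hedged usage sketch (an assumption, mirroring the ExponentialDecay example above): the
# schedule is passed to an optimizer, which then updates the rate at every training step.
# The boundaries reuse the epochs-to-steps conversion (n_steps_per_epoch) from earlier cells.
# optimizer = keras.optimizers.SGD(learning_rate)
# model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
#               metrics=["accuracy"])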
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = math.ceil(len(X) / batch_size) * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
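# Descriptive note (added comment): find_learning_rate() grows the learning rate
# exponentially from min_rate to max_rate over the given number of batches while recording
# the loss after each batch, then restores the model's initial weights and learning rate so
# this exploratory run does not affect the real training.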
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
###Output
_____no_output_____
###Markdown
**Warning**: In the `on_batch_end()` method, `logs["loss"]` used to contain the batch loss, but in TensorFlow 2.2.0 it was replaced with the mean loss (since the start of the epoch). This explains why the graph below is much smoother than in the book (if you are using TF 2.2 or above). It also means that there is a lag between the moment the batch loss starts exploding and the moment the explosion becomes clear in the graph. So you should choose a slightly smaller learning rate than you would have chosen with the "noisy" graph. Alternatively, you can tweak the `ExponentialLearningRate` callback above so it computes the batch loss (based on the current mean loss and the previous mean loss):```pythonclass ExponentialLearningRate(keras.callbacks.Callback): def __init__(self, factor): self.factor = factor self.rates = [] self.losses = [] def on_epoch_begin(self, epoch, logs=None): self.prev_loss = 0 def on_batch_end(self, batch, logs=None): batch_loss = logs["loss"] * (batch + 1) - self.prev_loss * batch self.prev_loss = logs["loss"] self.rates.append(K.get_value(self.model.optimizer.lr)) self.losses.append(batch_loss) K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)```
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
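    # Descriptive note (added comment): the schedule has three linear phases — warm up from
    # start_rate to max_rate over the first half of the iterations, come back down to
    # start_rate over the second half, then drop towards last_rate over the final
    # last_iterations steps (see on_batch_begin below).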
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(math.ceil(len(X_train) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 1s 2ms/step - loss: 0.6572 - accuracy: 0.7740 - val_loss: 0.4872 - val_accuracy: 0.8338
Epoch 2/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4580 - accuracy: 0.8397 - val_loss: 0.4274 - val_accuracy: 0.8520
Epoch 3/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4121 - accuracy: 0.8545 - val_loss: 0.4116 - val_accuracy: 0.8588
Epoch 4/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3837 - accuracy: 0.8642 - val_loss: 0.3868 - val_accuracy: 0.8688
Epoch 5/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3639 - accuracy: 0.8719 - val_loss: 0.3766 - val_accuracy: 0.8688
Epoch 6/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3456 - accuracy: 0.8775 - val_loss: 0.3739 - val_accuracy: 0.8706
Epoch 7/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3330 - accuracy: 0.8811 - val_loss: 0.3635 - val_accuracy: 0.8708
Epoch 8/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3184 - accuracy: 0.8861 - val_loss: 0.3959 - val_accuracy: 0.8610
Epoch 9/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3065 - accuracy: 0.8890 - val_loss: 0.3475 - val_accuracy: 0.8770
Epoch 10/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2943 - accuracy: 0.8927 - val_loss: 0.3392 - val_accuracy: 0.8806
Epoch 11/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2838 - accuracy: 0.8963 - val_loss: 0.3467 - val_accuracy: 0.8800
Epoch 12/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2707 - accuracy: 0.9024 - val_loss: 0.3646 - val_accuracy: 0.8696
Epoch 13/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2536 - accuracy: 0.9079 - val_loss: 0.3350 - val_accuracy: 0.8842
Epoch 14/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2405 - accuracy: 0.9135 - val_loss: 0.3465 - val_accuracy: 0.8794
Epoch 15/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2279 - accuracy: 0.9185 - val_loss: 0.3257 - val_accuracy: 0.8830
Epoch 16/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2159 - accuracy: 0.9232 - val_loss: 0.3294 - val_accuracy: 0.8824
Epoch 17/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2062 - accuracy: 0.9263 - val_loss: 0.3333 - val_accuracy: 0.8882
Epoch 18/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1978 - accuracy: 0.9301 - val_loss: 0.3235 - val_accuracy: 0.8898
Epoch 19/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1892 - accuracy: 0.9337 - val_loss: 0.3233 - val_accuracy: 0.8906
Epoch 20/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1821 - accuracy: 0.9365 - val_loss: 0.3224 - val_accuracy: 0.8928
Epoch 21/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1752 - accuracy: 0.9400 - val_loss: 0.3220 - val_accuracy: 0.8908
Epoch 22/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1700 - accuracy: 0.9416 - val_loss: 0.3180 - val_accuracy: 0.8962
Epoch 23/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1655 - accuracy: 0.9438 - val_loss: 0.3187 - val_accuracy: 0.8940
Epoch 24/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1627 - accuracy: 0.9454 - val_loss: 0.3177 - val_accuracy: 0.8932
Epoch 25/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1610 - accuracy: 0.9462 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 3.2911 - accuracy: 0.7924 - val_loss: 0.7218 - val_accuracy: 0.8310
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.7282 - accuracy: 0.8245 - val_loss: 0.6826 - val_accuracy: 0.8382
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 0.7611 - accuracy: 0.7576 - val_loss: 0.3730 - val_accuracy: 0.8644
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4306 - accuracy: 0.8401 - val_loss: 0.3395 - val_accuracy: 0.8722
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
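# Descriptive note (added comment): AlphaDropout is the dropout variant designed for SELU
# networks — it preserves the mean and standard deviation of its inputs, so the network
# keeps its self-normalizing property.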
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4225 - accuracy: 0.8432
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
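# Descriptive note (added comment): calling the model with training=True keeps the
# AlphaDropout layers active at inference time; stacking 100 stochastic forward passes and
# averaging them gives a Monte Carlo estimate of the class probabilities (and their spread).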
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
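# Descriptive note (added comment): after each training step, the max_norm(1.) constraint
# rescales each neuron's incoming weight vector so that its L2 norm never exceeds 1.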
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5763 - accuracy: 0.8020 - val_loss: 0.3674 - val_accuracy: 0.8674
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3545 - accuracy: 0.8709 - val_loss: 0.3714 - val_accuracy: 0.8662
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4960 - accuracy: 0.4762
###Markdown
The model with the lowest validation loss gets about 47.6% accuracy on the validation set. It took 27 epochs to reach the lowest validation loss, with roughly 8 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 19s 9ms/step - loss: 1.9765 - accuracy: 0.2968 - val_loss: 1.6602 - val_accuracy: 0.4042
Epoch 2/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6787 - accuracy: 0.4056 - val_loss: 1.5887 - val_accuracy: 0.4304
Epoch 3/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6097 - accuracy: 0.4274 - val_loss: 1.5781 - val_accuracy: 0.4326
Epoch 4/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5574 - accuracy: 0.4486 - val_loss: 1.5064 - val_accuracy: 0.4676
Epoch 5/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5075 - accuracy: 0.4642 - val_loss: 1.4412 - val_accuracy: 0.4844
Epoch 6/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4664 - accuracy: 0.4787 - val_loss: 1.4179 - val_accuracy: 0.4984
Epoch 7/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4334 - accuracy: 0.4932 - val_loss: 1.4277 - val_accuracy: 0.4906
Epoch 8/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.4054 - accuracy: 0.5038 - val_loss: 1.3843 - val_accuracy: 0.5130
Epoch 9/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3816 - accuracy: 0.5106 - val_loss: 1.3691 - val_accuracy: 0.5108
Epoch 10/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3547 - accuracy: 0.5206 - val_loss: 1.3552 - val_accuracy: 0.5226
Epoch 11/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.3244 - accuracy: 0.5371 - val_loss: 1.3678 - val_accuracy: 0.5142
Epoch 12/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3078 - accuracy: 0.5393 - val_loss: 1.3844 - val_accuracy: 0.5080
Epoch 13/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2889 - accuracy: 0.5431 - val_loss: 1.3566 - val_accuracy: 0.5164
Epoch 14/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2607 - accuracy: 0.5559 - val_loss: 1.3626 - val_accuracy: 0.5248
Epoch 15/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2580 - accuracy: 0.5587 - val_loss: 1.3616 - val_accuracy: 0.5276
Epoch 16/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2441 - accuracy: 0.5586 - val_loss: 1.3350 - val_accuracy: 0.5286
Epoch 17/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2241 - accuracy: 0.5676 - val_loss: 1.3370 - val_accuracy: 0.5408
Epoch 18/100
<<29 more lines>>
Epoch 33/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0336 - accuracy: 0.6369 - val_loss: 1.3682 - val_accuracy: 0.5450
Epoch 34/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.0228 - accuracy: 0.6388 - val_loss: 1.3348 - val_accuracy: 0.5458
Epoch 35/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0205 - accuracy: 0.6407 - val_loss: 1.3490 - val_accuracy: 0.5440
Epoch 36/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.0008 - accuracy: 0.6489 - val_loss: 1.3568 - val_accuracy: 0.5408
Epoch 37/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9785 - accuracy: 0.6543 - val_loss: 1.3628 - val_accuracy: 0.5396
Epoch 38/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9832 - accuracy: 0.6592 - val_loss: 1.3617 - val_accuracy: 0.5482
Epoch 39/100
1407/1407 [==============================] - 12s 8ms/step - loss: 0.9707 - accuracy: 0.6581 - val_loss: 1.3767 - val_accuracy: 0.5446
Epoch 40/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9590 - accuracy: 0.6651 - val_loss: 1.4200 - val_accuracy: 0.5314
Epoch 41/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9548 - accuracy: 0.6668 - val_loss: 1.3692 - val_accuracy: 0.5450
Epoch 42/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9480 - accuracy: 0.6667 - val_loss: 1.3841 - val_accuracy: 0.5310
Epoch 43/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9411 - accuracy: 0.6716 - val_loss: 1.4036 - val_accuracy: 0.5382
Epoch 44/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9383 - accuracy: 0.6708 - val_loss: 1.4114 - val_accuracy: 0.5236
Epoch 45/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9258 - accuracy: 0.6769 - val_loss: 1.4224 - val_accuracy: 0.5324
Epoch 46/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9072 - accuracy: 0.6836 - val_loss: 1.3875 - val_accuracy: 0.5442
Epoch 47/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8996 - accuracy: 0.6850 - val_loss: 1.4449 - val_accuracy: 0.5280
Epoch 48/100
1407/1407 [==============================] - 13s 9ms/step - loss: 0.9050 - accuracy: 0.6835 - val_loss: 1.4167 - val_accuracy: 0.5338
Epoch 49/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8934 - accuracy: 0.6880 - val_loss: 1.4260 - val_accuracy: 0.5294
157/157 [==============================] - 1s 2ms/step - loss: 1.3344 - accuracy: 0.5398
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 27 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 5 epochs and continued to make progress until the 16th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 54.0% accuracy instead of 47.6%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 12s instead of 8s, because of the extra computations required by the BN layers. But overall the training time (wall time) was shortened significantly! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4633 - accuracy: 0.4792
###Markdown
We get 47.9% accuracy, which is not much better than the original model (47.6%), and not as good as the model using batch normalization (54.0%). However, convergence was almost as fast as with the BN model, plus each epoch took only 7 seconds. So it's by far the fastest model to train so far. e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 9s 5ms/step - loss: 2.0583 - accuracy: 0.2742 - val_loss: 1.7429 - val_accuracy: 0.3858
Epoch 2/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.6852 - accuracy: 0.4008 - val_loss: 1.7055 - val_accuracy: 0.3792
Epoch 3/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5963 - accuracy: 0.4413 - val_loss: 1.7401 - val_accuracy: 0.4072
Epoch 4/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5231 - accuracy: 0.4634 - val_loss: 1.5728 - val_accuracy: 0.4584
Epoch 5/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.4619 - accuracy: 0.4887 - val_loss: 1.5448 - val_accuracy: 0.4702
Epoch 6/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.4074 - accuracy: 0.5061 - val_loss: 1.5678 - val_accuracy: 0.4664
Epoch 7/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.3718 - accuracy: 0.5222 - val_loss: 1.5764 - val_accuracy: 0.4824
Epoch 8/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.3220 - accuracy: 0.5387 - val_loss: 1.4805 - val_accuracy: 0.4890
Epoch 9/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2908 - accuracy: 0.5487 - val_loss: 1.5521 - val_accuracy: 0.4638
Epoch 10/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.2537 - accuracy: 0.5607 - val_loss: 1.5281 - val_accuracy: 0.4924
Epoch 11/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2215 - accuracy: 0.5782 - val_loss: 1.5147 - val_accuracy: 0.5046
Epoch 12/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.1910 - accuracy: 0.5831 - val_loss: 1.5248 - val_accuracy: 0.5002
Epoch 13/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1659 - accuracy: 0.5982 - val_loss: 1.5620 - val_accuracy: 0.5066
Epoch 14/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1282 - accuracy: 0.6120 - val_loss: 1.5440 - val_accuracy: 0.5180
Epoch 15/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1127 - accuracy: 0.6133 - val_loss: 1.5782 - val_accuracy: 0.5146
Epoch 16/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0917 - accuracy: 0.6266 - val_loss: 1.6182 - val_accuracy: 0.5182
Epoch 17/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.0620 - accuracy: 0.6331 - val_loss: 1.6285 - val_accuracy: 0.5126
Epoch 18/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0433 - accuracy: 0.6413 - val_loss: 1.6299 - val_accuracy: 0.5158
Epoch 19/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0087 - accuracy: 0.6549 - val_loss: 1.7172 - val_accuracy: 0.5062
Epoch 20/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9950 - accuracy: 0.6571 - val_loss: 1.6524 - val_accuracy: 0.5098
Epoch 21/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9848 - accuracy: 0.6652 - val_loss: 1.7686 - val_accuracy: 0.5038
Epoch 22/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9597 - accuracy: 0.6744 - val_loss: 1.6177 - val_accuracy: 0.5084
Epoch 23/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9399 - accuracy: 0.6790 - val_loss: 1.7095 - val_accuracy: 0.5082
Epoch 24/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9148 - accuracy: 0.6884 - val_loss: 1.7160 - val_accuracy: 0.5150
Epoch 25/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9023 - accuracy: 0.6949 - val_loss: 1.7017 - val_accuracy: 0.5152
Epoch 26/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8732 - accuracy: 0.7031 - val_loss: 1.7274 - val_accuracy: 0.5088
Epoch 27/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.8542 - accuracy: 0.7091 - val_loss: 1.7648 - val_accuracy: 0.5166
Epoch 28/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8499 - accuracy: 0.7118 - val_loss: 1.7973 - val_accuracy: 0.5000
157/157 [==============================] - 0s 1ms/step - loss: 1.4805 - accuracy: 0.4890
###Markdown
The model reaches 48.9% accuracy on the validation set. That's very slightly better than without dropout (47.6%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get no accuracy improvement in this case (we're still at 48.9% accuracy). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(math.ceil(len(X_train_scaled) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/15
352/352 [==============================] - 3s 6ms/step - loss: 2.2298 - accuracy: 0.2349 - val_loss: 1.7841 - val_accuracy: 0.3834
Epoch 2/15
352/352 [==============================] - 2s 6ms/step - loss: 1.7928 - accuracy: 0.3689 - val_loss: 1.6806 - val_accuracy: 0.4086
Epoch 3/15
352/352 [==============================] - 2s 6ms/step - loss: 1.6475 - accuracy: 0.4190 - val_loss: 1.6378 - val_accuracy: 0.4350
Epoch 4/15
352/352 [==============================] - 2s 6ms/step - loss: 1.5428 - accuracy: 0.4543 - val_loss: 1.6266 - val_accuracy: 0.4390
Epoch 5/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4865 - accuracy: 0.4769 - val_loss: 1.6158 - val_accuracy: 0.4384
Epoch 6/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4339 - accuracy: 0.4866 - val_loss: 1.5850 - val_accuracy: 0.4412
Epoch 7/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4042 - accuracy: 0.5056 - val_loss: 1.6146 - val_accuracy: 0.4384
Epoch 8/15
352/352 [==============================] - 2s 6ms/step - loss: 1.3437 - accuracy: 0.5229 - val_loss: 1.5299 - val_accuracy: 0.4846
Epoch 9/15
352/352 [==============================] - 2s 5ms/step - loss: 1.2721 - accuracy: 0.5459 - val_loss: 1.5145 - val_accuracy: 0.4874
Epoch 10/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1942 - accuracy: 0.5698 - val_loss: 1.4958 - val_accuracy: 0.5040
Epoch 11/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1211 - accuracy: 0.6033 - val_loss: 1.5406 - val_accuracy: 0.4984
Epoch 12/15
352/352 [==============================] - 2s 6ms/step - loss: 1.0673 - accuracy: 0.6161 - val_loss: 1.5284 - val_accuracy: 0.5144
Epoch 13/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9927 - accuracy: 0.6435 - val_loss: 1.5449 - val_accuracy: 0.5140
Epoch 14/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9205 - accuracy: 0.6703 - val_loss: 1.5652 - val_accuracy: 0.5224
Epoch 15/15
352/352 [==============================] - 2s 6ms/step - loss: 0.8936 - accuracy: 0.6801 - val_loss: 1.5912 - val_accuracy: 0.5198
###Markdown
Setup
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
###Output
The tensorboard extension is already loaded. To reload it, use:
%reload_ext tensorboard
###Markdown
Reusing Pretrained Layers Data
###Code
def split_dataset(X, y):
"""
"""
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
X, y = dict(), dict()
X_A, y_A = dict(), dict()
X_B, y_B = dict(), dict()
(X['train'], y['train']), (X['test'], y['test']) = keras.datasets.fashion_mnist.load_data()
X['train'] = X['train']/255.0
X['test'] = X['test']/255.0
(X_A['train'], y_A['train']), (X_B['train'], y_B['train']) = split_dataset(X['train'],y['train'])
(X_A['test'], y_A['test']), (X_B['test'], y_B['test']) = split_dataset(X['test'],y['test'])
X_B['train'] = X_B['train'][:200]
y_B['train'] = y_B['train'][:200]
###Output
_____no_output_____
###Markdown
Model A (for 8 classes)
###Code
def create_model_A():
model_A = keras.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28,28]))
for n_hidden in (300,100,50,50,50):
model_A.add(keras.layers.Dense(n_hidden, activation='selu', use_bias=False))
model_A.add(keras.layers.BatchNormalization())
model_A.add(keras.layers.Dense(8, activation = 'softmax'))
return model_A
model_A = create_model_A()
model_A.compile(loss='sparse_categorical_crossentropy',
optimizer = keras.optimizers.Adam(learning_rate=1e-2),
metrics = ['accuracy'])
model_A.summary()
history = model_A.fit(X_A['train'],y_A['train'],
validation_split=0.3,
epochs = 100,
callbacks = [keras.callbacks.EarlyStopping(patience=10)])
model_A.save("my_model_A.h5")
###Output
_____no_output_____
###Markdown
 Training model for binary classification (model B)
###Code
model_B = create_model_A()
# task B is binary, so replace the 8-class softmax output with a single sigmoid unit
model_B.pop()
model_B.add(keras.layers.Dense(1, activation='sigmoid'))
model_B.compile(loss='binary_crossentropy',
                optimizer = keras.optimizers.Adam(learning_rate=1e-2),
                metrics = ['accuracy'])
history = model_B.fit(X_B['train'],y_B['train'],
                      validation_split=0.3,
                      epochs = 100,
                      callbacks = [keras.callbacks.EarlyStopping(patience=10)])
###Output
Epoch 1/100
1050/1050 [==============================] - 8s 7ms/step - loss: 0.1343 - accuracy: 0.9511 - val_loss: 0.2718 - val_accuracy: 0.9228
Epoch 2/100
1050/1050 [==============================] - 7s 7ms/step - loss: 0.1299 - accuracy: 0.9529 - val_loss: 0.2616 - val_accuracy: 0.9258
Epoch 3/100
1050/1050 [==============================] - 8s 7ms/step - loss: 0.1280 - accuracy: 0.9538 - val_loss: 0.2616 - val_accuracy: 0.9287
Epoch 4/100
1050/1050 [==============================] - 8s 7ms/step - loss: 0.1244 - accuracy: 0.9544 - val_loss: 0.2989 - val_accuracy: 0.9196
Epoch 5/100
1050/1050 [==============================] - 8s 7ms/step - loss: 0.1241 - accuracy: 0.9546 - val_loss: 0.3452 - val_accuracy: 0.9228
Epoch 6/100
1050/1050 [==============================] - 7s 7ms/step - loss: 0.1189 - accuracy: 0.9564 - val_loss: 0.2738 - val_accuracy: 0.9233
Epoch 7/100
1050/1050 [==============================] - 8s 7ms/step - loss: 0.1227 - accuracy: 0.9543 - val_loss: 0.2750 - val_accuracy: 0.9158
Epoch 8/100
1050/1050 [==============================] - 7s 7ms/step - loss: 0.1240 - accuracy: 0.9548 - val_loss: 0.3131 - val_accuracy: 0.9183
Epoch 9/100
1050/1050 [==============================] - 7s 7ms/step - loss: 0.1206 - accuracy: 0.9557 - val_loss: 0.2963 - val_accuracy: 0.9258
Epoch 10/100
1050/1050 [==============================] - 7s 7ms/step - loss: 0.1143 - accuracy: 0.9586 - val_loss: 0.3108 - val_accuracy: 0.9272
Epoch 11/100
1050/1050 [==============================] - 7s 7ms/step - loss: 0.1129 - accuracy: 0.9591 - val_loss: 0.2631 - val_accuracy: 0.9281
Epoch 12/100
1050/1050 [==============================] - 7s 7ms/step - loss: 0.1127 - accuracy: 0.9576 - val_loss: 0.2911 - val_accuracy: 0.9218
###Markdown
 Reusing A's weights
###Code
transfer_A_model = keras.Sequential(
keras.models.load_model('my_model_A.h5').layers[:-1]
) # all layers excluding output
for layer in transfer_A_model.layers:
layer.trainable = False
transfer_A_model.add(keras.layers.Dense(1,activation='sigmoid'))
transfer_A_model.compile(loss='binary_crossentropy',
optimizer = keras.optimizers.Adam(learning_rate=1e-2),
metrics = ['accuracy'])
transfer_A_model.summary()
###Output
Model: "sequential_23"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten_19 (Flatten) (None, 784) 0
dense_45 (Dense) (None, 300) 235200
batch_normalization_18 (Bat (None, 300) 1200
chNormalization)
dense_46 (Dense) (None, 100) 30000
batch_normalization_19 (Bat (None, 100) 400
chNormalization)
dense_47 (Dense) (None, 50) 5000
batch_normalization_20 (Bat (None, 50) 200
chNormalization)
dense_48 (Dense) (None, 50) 2500
batch_normalization_21 (Bat (None, 50) 200
chNormalization)
dense_49 (Dense) (None, 50) 2500
batch_normalization_22 (Bat (None, 50) 200
chNormalization)
dense_70 (Dense) (None, 1) 51
=================================================================
Total params: 277,451
Trainable params: 51
Non-trainable params: 277,400
_________________________________________________________________
###Markdown
Note that `transfer_A_model` and `model_A` actually share layers now, so when we train one, it will update both models. If we want to avoid that, we need to build `transfer_A_model` on top of a clone of `model_A`:```>> model_A = keras.models.load_model("my_model_A.h5")>> model_A_clone = keras.models.clone_model(model_A)>> model_A_clone.set_weights(model_A.get_weights())```
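For completeness, a minimal sketch of the second half of that recipe (it assumes the clone from the snippet above exists; `transfer_A_model_clone` is just an illustrative name, not from the original): build the transfer model on the clone's layers so that training it leaves `model_A` untouched.

```python
transfer_A_model_clone = keras.Sequential(model_A_clone.layers[:-1])  # reuse all but the 8-class head
for layer in transfer_A_model_clone.layers:
    layer.trainable = False                                           # freeze the reused layers
transfer_A_model_clone.add(keras.layers.Dense(1, activation='sigmoid'))
transfer_A_model_clone.compile(loss='binary_crossentropy',
                               optimizer=keras.optimizers.Adam(learning_rate=1e-2),
                               metrics=['accuracy'])
```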
###Code
transfer_A_model.fit(X_B['train'],y_B['train'],
validation_split = 0.3,
epochs=100,
callbacks=[keras.callbacks.EarlyStopping(patience=10)])
###Output
_____no_output_____
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 6s 3ms/step - loss: 1.6314 - accuracy: 0.5054 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 6s 3ms/step - loss: 0.8416 - accuracy: 0.7247 - val_loss: 0.7130 - val_accuracy: 0.7656
Epoch 3/10
1719/1719 [==============================] - 6s 3ms/step - loss: 0.7053 - accuracy: 0.7637 - val_loss: 0.6427 - val_accuracy: 0.7898
Epoch 4/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.6325 - accuracy: 0.7908 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 6s 3ms/step - loss: 0.5992 - accuracy: 0.8021 - val_loss: 0.5582 - val_accuracy: 0.8200
Epoch 6/10
967/1719 [===============>..............] - ETA: 2s - loss: 0.5664 - accuracy: 0.8134
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
_____no_output_____
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
_____no_output_____
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
_____no_output_____
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
_____no_output_____
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
_____no_output_____
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
_____no_output_____
###Markdown
Not great at all, we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
#bn1.updates #deprecated
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
_____no_output_____
###Markdown
 Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer includes offset parameters of its own; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
_____no_output_____
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
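To see the difference concretely (plain NumPy, just the underlying math rather than the Keras API itself): `clipvalue` clips each gradient component independently, which can change the gradient's direction, while `clipnorm` rescales the whole gradient only if its norm exceeds the threshold, preserving its direction.

```python
import numpy as np

g = np.array([0.9, 100.0])                       # a gradient with one exploding component
by_value = np.clip(g, -1.0, 1.0)                 # clipvalue=1.0 -> [0.9, 1.0]: direction changes
by_norm = g * min(1.0, 1.0 / np.linalg.norm(g))  # clipnorm=1.0  -> ~[0.009, 1.0]: direction kept
print(by_value, by_norm)
```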
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
_____no_output_____
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
_____no_output_____
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4.5!
###Code
(100 - 97.05) / (100 - 99.35)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
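As a quick sanity check of the `s = 1 / decay` claim with the values used in the next cell (`lr0=0.01`, `decay=1e-4`, hence `s = 10,000` steps): the rate should be halved after 10,000 steps and divided by 3 after 20,000.

```python
lr0, decay = 0.01, 1e-4                      # same values as in the next cell; s = 1/decay
for steps in (0, 10_000, 20_000):
    print(steps, lr0 / (1 + steps * decay))  # 0.01, 0.005, 0.00333...
```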
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
import math
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = math.ceil(len(X_train) / batch_size)
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
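With the values used in the cells below (`lr0=0.01`, `s=20`), the rate is divided by 10 every 20 epochs; a one-line check of the formula:

```python
lr0, s = 0.01, 20
print([lr0 * 0.1**(epoch / s) for epoch in (0, 20, 40)])  # ≈ [0.01, 0.001, 0.0001]
```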
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
_____no_output_____
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = math.ceil(len(X) / batch_size) * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
###Output
_____no_output_____
###Markdown
**Warning**: In the `on_batch_end()` method, `logs["loss"]` used to contain the batch loss, but in TensorFlow 2.2.0 it was replaced with the mean loss (since the start of the epoch). This explains why the graph below is much smoother than in the book (if you are using TF 2.2 or above). It also means that there is a lag between the moment the batch loss starts exploding and the moment the explosion becomes clear in the graph. So you should choose a slightly smaller learning rate than you would have chosen with the "noisy" graph. Alternatively, you can tweak the `ExponentialLearningRate` callback above so it computes the batch loss (based on the current mean loss and the previous mean loss):```pythonclass ExponentialLearningRate(keras.callbacks.Callback): def __init__(self, factor): self.factor = factor self.rates = [] self.losses = [] def on_epoch_begin(self, epoch, logs=None): self.prev_loss = 0 def on_batch_end(self, batch, logs=None): batch_loss = logs["loss"] * (batch + 1) - self.prev_loss * batch self.prev_loss = logs["loss"] self.rates.append(K.get_value(self.model.optimizer.lr)) self.losses.append(batch_loss) K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)```
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(math.ceil(len(X_train) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
_____no_output_____
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
_____no_output_____
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
_____no_output_____
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
_____no_output_____
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
_____no_output_____
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
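A rough sketch of that kind of sweep (illustrative only, not the code actually used: `build_cifar10_dnn` is a hypothetical helper that re-creates the 20-layer DNN defined above, and the CIFAR10 arrays are assumed to be loaded as in the cells below):

```python
def build_cifar10_dnn():
    # re-create the 20-hidden-layer DNN from exercise 8a
    m = keras.models.Sequential([keras.layers.Flatten(input_shape=[32, 32, 3])])
    for _ in range(20):
        m.add(keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"))
    m.add(keras.layers.Dense(10, activation="softmax"))
    return m

for lr in (1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 1e-3, 3e-3, 1e-2):
    m = build_cifar10_dnn()
    m.compile(loss="sparse_categorical_crossentropy",
              optimizer=keras.optimizers.Nadam(lr=lr),
              metrics=["accuracy"])
    logdir = os.path.join("my_cifar10_logs", "lr_search_{:.0e}".format(lr))
    m.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid),
          callbacks=[keras.callbacks.TensorBoard(logdir)])  # then compare curves in TensorBoard
```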
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
_____no_output_____
###Markdown
The model with the lowest validation loss gets about 47.6% accuracy on the validation set. It took 27 epochs to reach the lowest validation loss, with roughly 8 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
_____no_output_____
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 27 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 5 epochs and continued to make progress until the 16th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 54.0% accuracy instead of 47.6%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 12s instead of 8s, because of the extra computations required by the BN layers. But overall the training time (wall time) was shortened significantly! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
_____no_output_____
###Markdown
We get 47.9% accuracy, which is not much better than the original model (47.6%), and not as good as the model using batch normalization (54.0%). However, convergence was almost as fast as with the BN model, plus each epoch took only 7 seconds. So it's by far the fastest model to train so far. e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
_____no_output_____
###Markdown
The model reaches 48.9% accuracy on the validation set. That's very slightly better than without dropout (47.6%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get no accuracy improvement in this case (we're still at 48.9% accuracy).So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(math.ceil(len(X_train_scaled) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
_____no_output_____
###Markdown
One cycle allowed us to train the model in just 15 epochs, each taking only 2 seconds (thanks to the larger batch size). This is several times faster than the fastest model we trained so far. Moreover, we improved the model's performance (from 47.6% to 52.0%). The batch normalized model reaches a slightly better performance (54%), but it's much slower to train.
###Markdown
 **Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import pandas as pd
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
print(tf.__version__)
# import kerastuner as kt
# print(kt.__version__)
###Output
1.0.1
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
keras.layers.Dense(10, activation="relu", kernel_initializer="he_uniform")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.2819 - accuracy: 0.6229 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.7955 - accuracy: 0.7362 - val_loss: 0.7130 - val_accuracy: 0.7656
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.6816 - accuracy: 0.7721 - val_loss: 0.6427 - val_accuracy: 0.7898
Epoch 4/10
1719/1719 [==============================] - 3s 1ms/step - loss: 0.6217 - accuracy: 0.7944 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 3s 1ms/step - loss: 0.5832 - accuracy: 0.8075 - val_loss: 0.5582 - val_accuracy: 0.8200
Epoch 6/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5553 - accuracy: 0.8156 - val_loss: 0.5350 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5338 - accuracy: 0.8224 - val_loss: 0.5157 - val_accuracy: 0.8304
Epoch 8/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5173 - accuracy: 0.8272 - val_loss: 0.5079 - val_accuracy: 0.8284
Epoch 9/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5040 - accuracy: 0.8291 - val_loss: 0.4895 - val_accuracy: 0.8390
Epoch 10/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.4924 - accuracy: 0.8321 - val_loss: 0.4817 - val_accuracy: 0.8396
###Markdown
Now let's try PReLU:
###Code
[m for m in dir(keras.activations) if not m.startswith('_')]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
                    validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 2ms/step - loss: 1.3461 - accuracy: 0.6209 - val_loss: 0.9255 - val_accuracy: 0.7184
Epoch 2/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.8197 - accuracy: 0.7355 - val_loss: 0.7305 - val_accuracy: 0.7632
Epoch 3/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.6966 - accuracy: 0.7694 - val_loss: 0.6564 - val_accuracy: 0.7882
Epoch 4/10
1719/1719 [==============================] - 3s 1ms/step - loss: 0.6331 - accuracy: 0.7910 - val_loss: 0.6003 - val_accuracy: 0.8046
Epoch 5/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5917 - accuracy: 0.8057 - val_loss: 0.5656 - val_accuracy: 0.8184
Epoch 6/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5618 - accuracy: 0.8135 - val_loss: 0.5406 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5390 - accuracy: 0.8205 - val_loss: 0.5196 - val_accuracy: 0.8314
Epoch 8/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5213 - accuracy: 0.8258 - val_loss: 0.5113 - val_accuracy: 0.8314
Epoch 9/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5070 - accuracy: 0.8288 - val_loss: 0.4916 - val_accuracy: 0.8378
Epoch 10/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.4945 - accuracy: 0.8315 - val_loss: 0.4826 - val_accuracy: 0.8398
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
print([m for m in dir(keras.layers) if not m.startswith('_')])
###Output
['AbstractRNNCell', 'Activation', 'ActivityRegularization', 'Add', 'AdditiveAttention', 'AlphaDropout', 'Attention', 'Average', 'AveragePooling1D', 'AveragePooling2D', 'AveragePooling3D', 'AvgPool1D', 'AvgPool2D', 'AvgPool3D', 'BatchNormalization', 'Bidirectional', 'Concatenate', 'Conv1D', 'Conv1DTranspose', 'Conv2D', 'Conv2DTranspose', 'Conv3D', 'Conv3DTranspose', 'ConvLSTM2D', 'Convolution1D', 'Convolution1DTranspose', 'Convolution2D', 'Convolution2DTranspose', 'Convolution3D', 'Convolution3DTranspose', 'Cropping1D', 'Cropping2D', 'Cropping3D', 'Dense', 'DenseFeatures', 'DepthwiseConv2D', 'Dot', 'Dropout', 'ELU', 'Embedding', 'Flatten', 'GRU', 'GRUCell', 'GaussianDropout', 'GaussianNoise', 'GlobalAveragePooling1D', 'GlobalAveragePooling2D', 'GlobalAveragePooling3D', 'GlobalAvgPool1D', 'GlobalAvgPool2D', 'GlobalAvgPool3D', 'GlobalMaxPool1D', 'GlobalMaxPool2D', 'GlobalMaxPool3D', 'GlobalMaxPooling1D', 'GlobalMaxPooling2D', 'GlobalMaxPooling3D', 'Input', 'InputLayer', 'InputSpec', 'LSTM', 'LSTMCell', 'Lambda', 'Layer', 'LayerNormalization', 'LeakyReLU', 'LocallyConnected1D', 'LocallyConnected2D', 'Masking', 'MaxPool1D', 'MaxPool2D', 'MaxPool3D', 'MaxPooling1D', 'MaxPooling2D', 'MaxPooling3D', 'Maximum', 'Minimum', 'Multiply', 'PReLU', 'Permute', 'RNN', 'ReLU', 'RepeatVector', 'Reshape', 'SeparableConv1D', 'SeparableConv2D', 'SeparableConvolution1D', 'SeparableConvolution2D', 'SimpleRNN', 'SimpleRNNCell', 'Softmax', 'SpatialDropout1D', 'SpatialDropout2D', 'SpatialDropout3D', 'StackedRNNCells', 'Subtract', 'ThresholdedReLU', 'TimeDistributed', 'UpSampling1D', 'UpSampling2D', 'UpSampling3D', 'Wrapper', 'ZeroPadding1D', 'ZeroPadding2D', 'ZeroPadding3D', 'add', 'average', 'concatenate', 'deserialize', 'dot', 'experimental', 'maximum', 'minimum', 'multiply', 'serialize', 'subtract']
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
tmp = np.array([[1,2,3],[3,4,5]])
print(tmp.mean(axis=0, keepdims=True).shape)
print(tmp.mean(axis=0, keepdims=False).shape)
tmp.mean(axis=0, keepdims=True)
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 14s 8ms/step - loss: 1.2829 - accuracy: 0.4983 - val_loss: 0.9338 - val_accuracy: 0.6178
Epoch 2/5
1719/1719 [==============================] - 13s 8ms/step - loss: 0.7986 - accuracy: 0.6979 - val_loss: 0.6783 - val_accuracy: 0.7522
Epoch 3/5
1719/1719 [==============================] - 14s 8ms/step - loss: 0.6677 - accuracy: 0.7569 - val_loss: 0.6068 - val_accuracy: 0.7758
Epoch 4/5
1719/1719 [==============================] - 14s 8ms/step - loss: 0.5748 - accuracy: 0.7895 - val_loss: 0.5438 - val_accuracy: 0.7980
Epoch 5/5
1719/1719 [==============================] - 14s 8ms/step - loss: 0.5289 - accuracy: 0.8073 - val_loss: 0.5301 - val_accuracy: 0.8112
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 14s 8ms/step - loss: 1.8180 - accuracy: 0.2561 - val_loss: 1.3865 - val_accuracy: 0.3860
Epoch 2/5
1719/1719 [==============================] - 13s 8ms/step - loss: 1.1887 - accuracy: 0.4919 - val_loss: 0.8971 - val_accuracy: 0.6212
Epoch 3/5
1719/1719 [==============================] - 13s 7ms/step - loss: 0.9724 - accuracy: 0.6007 - val_loss: 1.0007 - val_accuracy: 0.5332
Epoch 4/5
1719/1719 [==============================] - 14s 8ms/step - loss: 0.8620 - accuracy: 0.6574 - val_loss: 0.7816 - val_accuracy: 0.7106
Epoch 5/5
1719/1719 [==============================] - 14s 8ms/step - loss: 0.7953 - accuracy: 0.6901 - val_loss: 0.6948 - val_accuracy: 0.7308
###Markdown
Not great at all: we suffered from the vanishing/exploding gradients problem. Batch Normalization BN after each activation, with an additional BN layer right after the Input layer to standardize the inputs.
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.8750 - accuracy: 0.7124 - val_loss: 0.5525 - val_accuracy: 0.8230
Epoch 2/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.5753 - accuracy: 0.8029 - val_loss: 0.4724 - val_accuracy: 0.8472
Epoch 3/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.5189 - accuracy: 0.8206 - val_loss: 0.4375 - val_accuracy: 0.8554
Epoch 4/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4827 - accuracy: 0.8323 - val_loss: 0.4151 - val_accuracy: 0.8594
Epoch 5/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4565 - accuracy: 0.8407 - val_loss: 0.3997 - val_accuracy: 0.8636
Epoch 6/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4397 - accuracy: 0.8474 - val_loss: 0.3867 - val_accuracy: 0.8694
Epoch 7/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4242 - accuracy: 0.8513 - val_loss: 0.3763 - val_accuracy: 0.8708
Epoch 8/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4143 - accuracy: 0.8541 - val_loss: 0.3712 - val_accuracy: 0.8738
Epoch 9/10
1719/1719 [==============================] - 4s 3ms/step - loss: 0.4023 - accuracy: 0.8580 - val_loss: 0.3630 - val_accuracy: 0.8748
Epoch 10/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.3914 - accuracy: 0.8626 - val_loss: 0.3571 - val_accuracy: 0.8758
###Markdown
BN after activation, with pre-standardized inputs
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
# keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=10,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.8735 - accuracy: 0.7135 - val_loss: 0.5816 - val_accuracy: 0.8076
Epoch 2/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5761 - accuracy: 0.8026 - val_loss: 0.4895 - val_accuracy: 0.8366
Epoch 3/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5144 - accuracy: 0.8225 - val_loss: 0.4513 - val_accuracy: 0.8502
Epoch 4/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.4753 - accuracy: 0.8356 - val_loss: 0.4273 - val_accuracy: 0.8594
Epoch 5/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.4484 - accuracy: 0.8441 - val_loss: 0.4101 - val_accuracy: 0.8626
Epoch 6/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.4308 - accuracy: 0.8493 - val_loss: 0.3993 - val_accuracy: 0.8652
Epoch 7/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.4155 - accuracy: 0.8551 - val_loss: 0.3894 - val_accuracy: 0.8658
Epoch 8/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.4027 - accuracy: 0.8603 - val_loss: 0.3834 - val_accuracy: 0.8690
Epoch 9/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.3931 - accuracy: 0.8618 - val_loss: 0.3789 - val_accuracy: 0.8702
Epoch 10/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.3805 - accuracy: 0.8682 - val_loss: 0.3702 - val_accuracy: 0.8710
###Markdown
BN before activation Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need bias terms, since the `BatchNormalization` layer adds its own offset parameter per input; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 4s 2ms/step - loss: 1.0347 - accuracy: 0.6824 - val_loss: 0.6709 - val_accuracy: 0.7908
Epoch 2/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.6714 - accuracy: 0.7843 - val_loss: 0.5484 - val_accuracy: 0.8188
Epoch 3/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.5900 - accuracy: 0.8048 - val_loss: 0.4936 - val_accuracy: 0.8332
Epoch 4/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.5395 - accuracy: 0.8184 - val_loss: 0.4611 - val_accuracy: 0.8438
Epoch 5/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.5065 - accuracy: 0.8274 - val_loss: 0.4382 - val_accuracy: 0.8504
Epoch 6/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4853 - accuracy: 0.8332 - val_loss: 0.4212 - val_accuracy: 0.8554
Epoch 7/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4696 - accuracy: 0.8388 - val_loss: 0.4086 - val_accuracy: 0.8580
Epoch 8/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4549 - accuracy: 0.8420 - val_loss: 0.3981 - val_accuracy: 0.8630
Epoch 9/10
1719/1719 [==============================] - 4s 3ms/step - loss: 0.4413 - accuracy: 0.8478 - val_loss: 0.3894 - val_accuracy: 0.8642
Epoch 10/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4309 - accuracy: 0.8514 - val_loss: 0.3808 - val_accuracy: 0.8658
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
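# The clipped optimizer is then passed to compile() like any other optimizer,
# e.g. (a minimal sketch, assuming `model` is an already-built Keras model):
#   model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer)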
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
os.makedirs("my_models", exist_ok=True)  # make sure the target directory exists before saving
model_A.save(os.path.join("my_models", "my_model_A.h5"))
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_A = keras.models.load_model("my_models/my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
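# Note: model_B_on_A shares its layers with model_A, so training it will also
# modify model_A's weights. Cloning model_A (and copying its weights) keeps an
# untouched copy around: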
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
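# Freeze the reused layers for the first few epochs, so the randomly initialized
# output layer doesn't wreck their weights with large gradient updates: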
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 0s 22ms/step - loss: 0.5775 - accuracy: 0.6500 - val_loss: 0.5817 - val_accuracy: 0.6400
Epoch 2/4
7/7 [==============================] - 0s 7ms/step - loss: 0.5411 - accuracy: 0.6700 - val_loss: 0.5444 - val_accuracy: 0.6815
Epoch 3/4
7/7 [==============================] - 0s 7ms/step - loss: 0.5045 - accuracy: 0.7300 - val_loss: 0.5125 - val_accuracy: 0.7099
Epoch 4/4
7/7 [==============================] - 0s 6ms/step - loss: 0.4731 - accuracy: 0.7500 - val_loss: 0.4839 - val_accuracy: 0.7363
Epoch 1/16
7/7 [==============================] - 0s 21ms/step - loss: 0.3950 - accuracy: 0.8200 - val_loss: 0.3452 - val_accuracy: 0.8671
Epoch 2/16
7/7 [==============================] - 0s 7ms/step - loss: 0.2793 - accuracy: 0.9350 - val_loss: 0.2599 - val_accuracy: 0.9290
Epoch 3/16
7/7 [==============================] - 0s 7ms/step - loss: 0.2080 - accuracy: 0.9650 - val_loss: 0.2108 - val_accuracy: 0.9544
Epoch 4/16
7/7 [==============================] - 0s 7ms/step - loss: 0.1668 - accuracy: 0.9800 - val_loss: 0.1789 - val_accuracy: 0.9696
Epoch 5/16
7/7 [==============================] - 0s 7ms/step - loss: 0.1395 - accuracy: 0.9800 - val_loss: 0.1560 - val_accuracy: 0.9757
Epoch 6/16
7/7 [==============================] - 0s 7ms/step - loss: 0.1196 - accuracy: 0.9950 - val_loss: 0.1392 - val_accuracy: 0.9797
Epoch 7/16
7/7 [==============================] - 0s 7ms/step - loss: 0.1049 - accuracy: 0.9950 - val_loss: 0.1266 - val_accuracy: 0.9838
Epoch 8/16
7/7 [==============================] - 0s 7ms/step - loss: 0.0937 - accuracy: 0.9950 - val_loss: 0.1163 - val_accuracy: 0.9858
Epoch 9/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0847 - accuracy: 1.0000 - val_loss: 0.1066 - val_accuracy: 0.9888
Epoch 10/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0762 - accuracy: 1.0000 - val_loss: 0.1000 - val_accuracy: 0.9899
Epoch 11/16
7/7 [==============================] - 0s 8ms/step - loss: 0.0704 - accuracy: 1.0000 - val_loss: 0.0940 - val_accuracy: 0.9899
Epoch 12/16
7/7 [==============================] - 0s 7ms/step - loss: 0.0649 - accuracy: 1.0000 - val_loss: 0.0888 - val_accuracy: 0.9899
Epoch 13/16
7/7 [==============================] - 0s 6ms/step - loss: 0.0602 - accuracy: 1.0000 - val_loss: 0.0839 - val_accuracy: 0.9899
Epoch 14/16
7/7 [==============================] - 0s 7ms/step - loss: 0.0559 - accuracy: 1.0000 - val_loss: 0.0802 - val_accuracy: 0.9899
Epoch 15/16
7/7 [==============================] - 0s 7ms/step - loss: 0.0525 - accuracy: 1.0000 - val_loss: 0.0769 - val_accuracy: 0.9899
Epoch 16/16
7/7 [==============================] - 0s 7ms/step - loss: 0.0496 - accuracy: 1.0000 - val_loss: 0.0739 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 790us/step - loss: 0.0683 - accuracy: 0.9930
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4!
###Code
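# Error-rate ratio: model_B's test error (100 - 96.95) divided by
# model_B_on_A's test error (100 - 99.25); the accuracies are hard-coded from
# earlier runs, so they may differ slightly from the outputs above.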
(100 - 96.95) / (100 - 99.25)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "go-", label='Exponential Scheduling')
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-", label='Power Scheduling')
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.legend()
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Learning Rate Scheduling (per epoch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
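# This variant plugs into the same LearningRateScheduler callback, e.g.
# (a minimal sketch):
#   lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
# Keras then passes in the previous learning rate as the second argument.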
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration (i.e. mini batch) rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps (mini-batches) in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
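# A more general factory: given boundaries [b1, b2, ...] and values [v0, v1, v2, ...],
# the returned function uses v0 for epochs < b1, v1 for epochs < b2, and so on.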
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
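# Halve the learning rate whenever the validation loss (the default "val_loss"
# metric) has not improved for 5 consecutive epochs: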
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
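# ExponentialDecay(initial_learning_rate, decay_steps, decay_rate): here the
# learning rate is multiplied by 0.1 every s steps, i.e. every 20 epochs.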
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4893 - accuracy: 0.8275 - val_loss: 0.4095 - val_accuracy: 0.8602
Epoch 2/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3819 - accuracy: 0.8652 - val_loss: 0.3739 - val_accuracy: 0.8684
Epoch 3/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3486 - accuracy: 0.8767 - val_loss: 0.3736 - val_accuracy: 0.8680
Epoch 4/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3264 - accuracy: 0.8835 - val_loss: 0.3492 - val_accuracy: 0.8802
Epoch 5/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3104 - accuracy: 0.8895 - val_loss: 0.3428 - val_accuracy: 0.8800
Epoch 6/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2958 - accuracy: 0.8953 - val_loss: 0.3411 - val_accuracy: 0.8816
Epoch 7/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2854 - accuracy: 0.8989 - val_loss: 0.3351 - val_accuracy: 0.8816
Epoch 8/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2761 - accuracy: 0.9018 - val_loss: 0.3361 - val_accuracy: 0.8824
Epoch 9/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2678 - accuracy: 0.9052 - val_loss: 0.3262 - val_accuracy: 0.8854
Epoch 10/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2608 - accuracy: 0.9068 - val_loss: 0.3237 - val_accuracy: 0.8848
Epoch 11/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2551 - accuracy: 0.9089 - val_loss: 0.3247 - val_accuracy: 0.8868
Epoch 12/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2497 - accuracy: 0.9125 - val_loss: 0.3296 - val_accuracy: 0.8822
Epoch 13/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2450 - accuracy: 0.9139 - val_loss: 0.3216 - val_accuracy: 0.8878
Epoch 14/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2416 - accuracy: 0.9148 - val_loss: 0.3219 - val_accuracy: 0.8858
Epoch 15/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2376 - accuracy: 0.9169 - val_loss: 0.3205 - val_accuracy: 0.8870
Epoch 16/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2344 - accuracy: 0.9180 - val_loss: 0.3181 - val_accuracy: 0.8886
Epoch 17/25
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2317 - accuracy: 0.9186 - val_loss: 0.3195 - val_accuracy: 0.8894
Epoch 18/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2292 - accuracy: 0.9196 - val_loss: 0.3166 - val_accuracy: 0.8902
Epoch 19/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2270 - accuracy: 0.9209 - val_loss: 0.3194 - val_accuracy: 0.8886
Epoch 20/25
1719/1719 [==============================] - 3s 1ms/step - loss: 0.2251 - accuracy: 0.9219 - val_loss: 0.3166 - val_accuracy: 0.8910
Epoch 21/25
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2229 - accuracy: 0.9223 - val_loss: 0.3176 - val_accuracy: 0.8908
Epoch 22/25
1719/1719 [==============================] - 3s 1ms/step - loss: 0.2216 - accuracy: 0.9223 - val_loss: 0.3161 - val_accuracy: 0.8910
Epoch 23/25
1719/1719 [==============================] - 3s 1ms/step - loss: 0.2202 - accuracy: 0.9232 - val_loss: 0.3168 - val_accuracy: 0.8908
Epoch 24/25
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2188 - accuracy: 0.9239 - val_loss: 0.3164 - val_accuracy: 0.8898
Epoch 25/25
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2180 - accuracy: 0.9242 - val_loss: 0.3162 - val_accuracy: 0.8910
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
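# Like ExponentialDecay above, this schedule object is passed directly to an
# optimizer, e.g.: optimizer = keras.optimizers.SGD(learning_rate)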
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
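# 1cycle: the learning rate ramps up linearly from max_rate/10 to max_rate over
# the first half of training, back down over the second half, then drops towards
# last_rate over the final ~10% of iterations.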
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 1s 3ms/step - loss: 0.6572 - accuracy: 0.7740 - val_loss: 0.4872 - val_accuracy: 0.8338
Epoch 2/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4581 - accuracy: 0.8396 - val_loss: 0.4274 - val_accuracy: 0.8522
Epoch 3/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4121 - accuracy: 0.8547 - val_loss: 0.4115 - val_accuracy: 0.8584
Epoch 4/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3837 - accuracy: 0.8639 - val_loss: 0.3868 - val_accuracy: 0.8686
Epoch 5/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3639 - accuracy: 0.8717 - val_loss: 0.3766 - val_accuracy: 0.8682
Epoch 6/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3456 - accuracy: 0.8775 - val_loss: 0.3741 - val_accuracy: 0.8710
Epoch 7/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3330 - accuracy: 0.8810 - val_loss: 0.3634 - val_accuracy: 0.8710
Epoch 8/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3184 - accuracy: 0.8861 - val_loss: 0.3964 - val_accuracy: 0.8608
Epoch 9/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3064 - accuracy: 0.8889 - val_loss: 0.3489 - val_accuracy: 0.8754
Epoch 10/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2944 - accuracy: 0.8928 - val_loss: 0.3398 - val_accuracy: 0.8802
Epoch 11/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2838 - accuracy: 0.8964 - val_loss: 0.3462 - val_accuracy: 0.8820
Epoch 12/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2707 - accuracy: 0.9025 - val_loss: 0.3642 - val_accuracy: 0.8700
Epoch 13/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2535 - accuracy: 0.9085 - val_loss: 0.3352 - val_accuracy: 0.8838
Epoch 14/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2404 - accuracy: 0.9133 - val_loss: 0.3457 - val_accuracy: 0.8818
Epoch 15/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2278 - accuracy: 0.9184 - val_loss: 0.3260 - val_accuracy: 0.8848
Epoch 16/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2158 - accuracy: 0.9234 - val_loss: 0.3297 - val_accuracy: 0.8830
Epoch 17/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2061 - accuracy: 0.9263 - val_loss: 0.3342 - val_accuracy: 0.8888
Epoch 18/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1977 - accuracy: 0.9303 - val_loss: 0.3235 - val_accuracy: 0.8894
Epoch 19/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1891 - accuracy: 0.9339 - val_loss: 0.3228 - val_accuracy: 0.8914
Epoch 20/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1820 - accuracy: 0.9368 - val_loss: 0.3221 - val_accuracy: 0.8926
Epoch 21/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1751 - accuracy: 0.9402 - val_loss: 0.3216 - val_accuracy: 0.8912
Epoch 22/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1700 - accuracy: 0.9419 - val_loss: 0.3180 - val_accuracy: 0.8952
Epoch 23/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1654 - accuracy: 0.9439 - val_loss: 0.3185 - val_accuracy: 0.8944
Epoch 24/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1626 - accuracy: 0.9455 - val_loss: 0.3176 - val_accuracy: 0.8934
Epoch 25/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1610 - accuracy: 0.9465 - val_loss: 0.3169 - val_accuracy: 0.8946
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 5s 3ms/step - loss: 1.6313 - accuracy: 0.8113 - val_loss: 0.7218 - val_accuracy: 0.8310
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.7187 - accuracy: 0.8273 - val_loss: 0.6826 - val_accuracy: 0.8382
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 0.5838 - accuracy: 0.7997 - val_loss: 0.3730 - val_accuracy: 0.8644
Epoch 2/2
1719/1719 [==============================] - 6s 3ms/step - loss: 0.4209 - accuracy: 0.8442 - val_loss: 0.3396 - val_accuracy: 0.8720
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_valid_scaled, y_valid)
model.evaluate(X_test_scaled, y_test)
###Output
313/313 [==============================] - 0s 859us/step - loss: 0.4354 - accuracy: 0.8693
###Markdown
Because dropout is only active during training, the loss reported during training does not represent the true training loss. To measure the actual training loss, evaluate the model after training, with dropout turned off.
###Code
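# evaluate() runs with dropout turned off, so it reports the true training loss;
# the extra fit() epoch below shows the loss reported while dropout is active.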
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
1719/1719 [==============================] - 3s 2ms/step - loss: 0.4163 - accuracy: 0.8456
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
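# Monte Carlo Dropout: run 100 stochastic forward passes with training=True
# (keeping dropout active), then average the predicted probabilities; their
# standard deviation gives a rough estimate of the model's uncertainty.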
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
model(X_test_scaled[:1,])
model(X_test_scaled[:1,], training=True)
np.round(model.predict(X_test_scaled[:1]), 2)
y_probas.shape, y_proba.shape
y_test.shape
np.round(y_probas[:, :1], 2)[:5]
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
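# Instead of passing training=True on every call, subclass the dropout layers so
# they always behave as if training; this makes MC Dropout usable in a plain
# Sequential model: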
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 4ms/step - loss: 0.4749 - accuracy: 0.8333 - val_loss: 0.3697 - val_accuracy: 0.8646
Epoch 2/2
1719/1719 [==============================] - 6s 3ms/step - loss: 0.3541 - accuracy: 0.8710 - val_loss: 0.3827 - val_accuracy: 0.8674
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
5000/5000 [==============================] - 0s 65us/sample - loss: 1.5099 - accuracy: 0.4736
###Markdown
The model with the lowest validation loss gets about 47% accuracy on the validation set. It took 39 epochs to reach the lowest validation loss, with roughly 10 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 21s 466us/sample - loss: 1.8365 - accuracy: 0.3390 - val_loss: 1.6330 - val_accuracy: 0.4174
Epoch 2/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.6623 - accuracy: 0.4063 - val_loss: 1.5967 - val_accuracy: 0.4204
Epoch 3/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.5946 - accuracy: 0.4314 - val_loss: 1.5225 - val_accuracy: 0.4602
Epoch 4/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5417 - accuracy: 0.4551 - val_loss: 1.4680 - val_accuracy: 0.4756
Epoch 5/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5013 - accuracy: 0.4678 - val_loss: 1.4378 - val_accuracy: 0.4862
Epoch 6/100
45000/45000 [==============================] - 16s 361us/sample - loss: 1.4637 - accuracy: 0.4797 - val_loss: 1.4221 - val_accuracy: 0.4982
Epoch 7/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.4361 - accuracy: 0.4921 - val_loss: 1.4133 - val_accuracy: 0.4968
Epoch 8/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.4078 - accuracy: 0.4998 - val_loss: 1.3916 - val_accuracy: 0.5040
Epoch 9/100
45000/45000 [==============================] - 14s 315us/sample - loss: 1.3811 - accuracy: 0.5104 - val_loss: 1.3695 - val_accuracy: 0.5116
Epoch 10/100
45000/45000 [==============================] - 14s 318us/sample - loss: 1.3571 - accuracy: 0.5205 - val_loss: 1.3701 - val_accuracy: 0.5112
Epoch 11/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.3367 - accuracy: 0.5246 - val_loss: 1.3549 - val_accuracy: 0.5196
Epoch 12/100
45000/45000 [==============================] - 14s 316us/sample - loss: 1.3158 - accuracy: 0.5322 - val_loss: 1.4038 - val_accuracy: 0.5048
Epoch 13/100
45000/45000 [==============================] - 15s 328us/sample - loss: 1.3028 - accuracy: 0.5392 - val_loss: 1.3453 - val_accuracy: 0.5242
Epoch 14/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2798 - accuracy: 0.5460 - val_loss: 1.3427 - val_accuracy: 0.5218
Epoch 15/100
45000/45000 [==============================] - 15s 327us/sample - loss: 1.2642 - accuracy: 0.5502 - val_loss: 1.3802 - val_accuracy: 0.5072
Epoch 16/100
45000/45000 [==============================] - 15s 336us/sample - loss: 1.2497 - accuracy: 0.5592 - val_loss: 1.3870 - val_accuracy: 0.5154
Epoch 17/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.2339 - accuracy: 0.5645 - val_loss: 1.3270 - val_accuracy: 0.5366
Epoch 18/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2223 - accuracy: 0.5688 - val_loss: 1.3054 - val_accuracy: 0.5506
Epoch 19/100
45000/45000 [==============================] - 15s 339us/sample - loss: 1.2015 - accuracy: 0.5750 - val_loss: 1.3134 - val_accuracy: 0.5462
Epoch 20/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.1884 - accuracy: 0.5796 - val_loss: 1.3459 - val_accuracy: 0.5252
Epoch 21/100
45000/45000 [==============================] - 17s 370us/sample - loss: 1.1767 - accuracy: 0.5876 - val_loss: 1.3404 - val_accuracy: 0.5392
Epoch 22/100
45000/45000 [==============================] - 16s 366us/sample - loss: 1.1679 - accuracy: 0.5872 - val_loss: 1.3600 - val_accuracy: 0.5332
Epoch 23/100
45000/45000 [==============================] - 15s 337us/sample - loss: 1.1513 - accuracy: 0.5954 - val_loss: 1.3148 - val_accuracy: 0.5498
Epoch 24/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.1345 - accuracy: 0.6033 - val_loss: 1.3290 - val_accuracy: 0.5368
Epoch 25/100
45000/45000 [==============================] - 16s 350us/sample - loss: 1.1252 - accuracy: 0.6025 - val_loss: 1.3350 - val_accuracy: 0.5434
Epoch 26/100
45000/45000 [==============================] - 15s 341us/sample - loss: 1.1192 - accuracy: 0.6070 - val_loss: 1.3423 - val_accuracy: 0.5364
Epoch 27/100
45000/45000 [==============================] - 15s 342us/sample - loss: 1.1028 - accuracy: 0.6093 - val_loss: 1.3511 - val_accuracy: 0.5358
Epoch 28/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.0907 - accuracy: 0.6158 - val_loss: 1.3706 - val_accuracy: 0.5350
Epoch 29/100
45000/45000 [==============================] - 16s 345us/sample - loss: 1.0785 - accuracy: 0.6197 - val_loss: 1.3356 - val_accuracy: 0.5398
Epoch 30/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.0718 - accuracy: 0.6198 - val_loss: 1.3529 - val_accuracy: 0.5446
Epoch 31/100
45000/45000 [==============================] - 15s 333us/sample - loss: 1.0629 - accuracy: 0.6259 - val_loss: 1.3590 - val_accuracy: 0.5434
Epoch 32/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.0504 - accuracy: 0.6292 - val_loss: 1.3448 - val_accuracy: 0.5388
Epoch 33/100
45000/45000 [==============================] - 15s 325us/sample - loss: 1.0420 - accuracy: 0.6318 - val_loss: 1.3790 - val_accuracy: 0.5350
Epoch 34/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.0304 - accuracy: 0.6362 - val_loss: 1.3621 - val_accuracy: 0.5430
Epoch 35/100
45000/45000 [==============================] - 16s 356us/sample - loss: 1.0280 - accuracy: 0.6362 - val_loss: 1.3673 - val_accuracy: 0.5366
Epoch 36/100
45000/45000 [==============================] - 16s 354us/sample - loss: 1.0100 - accuracy: 0.6439 - val_loss: 1.3659 - val_accuracy: 0.5420
Epoch 37/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.0060 - accuracy: 0.6473 - val_loss: 1.3773 - val_accuracy: 0.5398
Epoch 38/100
45000/45000 [==============================] - 15s 332us/sample - loss: 0.9966 - accuracy: 0.6496 - val_loss: 1.3946 - val_accuracy: 0.5340
5000/5000 [==============================] - 1s 157us/sample - loss: 1.3054 - accuracy: 0.5506
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 39 epochs to reach the lowest validation loss, while the new model with BN took 18 epochs. That's more than twice as fast as the previous model. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 55% accuracy instead of 47%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged twice as fast, each epoch took about 16s instead of 10s, because of the extra computations required by the BN layers. So overall, although the number of epochs was reduced by 50%, the training time (wall time) was shortened by 30%, which is still pretty significant! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
5000/5000 [==============================] - 0s 74us/sample - loss: 1.4626 - accuracy: 0.5140
###Markdown
We get 51.4% accuracy, which is better than the original model, but not quite as good as the model using batch normalization. Moreover, it took 13 epochs to reach the best model, which is much faster than both the original model and the BN model, plus each epoch took only 10 seconds, just like the original model. So it's by far the fastest model to train (both in terms of epochs and wall time). e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 12s 263us/sample - loss: 1.8763 - accuracy: 0.3330 - val_loss: 1.7595 - val_accuracy: 0.3668
Epoch 2/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.6527 - accuracy: 0.4148 - val_loss: 1.7666 - val_accuracy: 0.3808
Epoch 3/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.5682 - accuracy: 0.4439 - val_loss: 1.6393 - val_accuracy: 0.4490
Epoch 4/100
45000/45000 [==============================] - 10s 211us/sample - loss: 1.5030 - accuracy: 0.4698 - val_loss: 1.6028 - val_accuracy: 0.4466
Epoch 5/100
45000/45000 [==============================] - 9s 209us/sample - loss: 1.4430 - accuracy: 0.4913 - val_loss: 1.5394 - val_accuracy: 0.4562
Epoch 6/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.4005 - accuracy: 0.5084 - val_loss: 1.5408 - val_accuracy: 0.4818
Epoch 7/100
45000/45000 [==============================] - 10s 216us/sample - loss: 1.3541 - accuracy: 0.5298 - val_loss: 1.5236 - val_accuracy: 0.4866
Epoch 8/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.3189 - accuracy: 0.5405 - val_loss: 1.5174 - val_accuracy: 0.4926
Epoch 9/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.2800 - accuracy: 0.5570 - val_loss: 1.5722 - val_accuracy: 0.4998
Epoch 10/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.2512 - accuracy: 0.5656 - val_loss: 1.4974 - val_accuracy: 0.5082
Epoch 11/100
45000/45000 [==============================] - 9s 203us/sample - loss: 1.2141 - accuracy: 0.5802 - val_loss: 1.6123 - val_accuracy: 0.4916
Epoch 12/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.1856 - accuracy: 0.5893 - val_loss: 1.5449 - val_accuracy: 0.5016
Epoch 13/100
45000/45000 [==============================] - 9s 204us/sample - loss: 1.1602 - accuracy: 0.5978 - val_loss: 1.6241 - val_accuracy: 0.5056
Epoch 14/100
45000/45000 [==============================] - 9s 199us/sample - loss: 1.1290 - accuracy: 0.6118 - val_loss: 1.6085 - val_accuracy: 0.4936
Epoch 15/100
45000/45000 [==============================] - 9s 198us/sample - loss: 1.1050 - accuracy: 0.6176 - val_loss: 1.6951 - val_accuracy: 0.4860
Epoch 16/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.0786 - accuracy: 0.6293 - val_loss: 1.5806 - val_accuracy: 0.5044
Epoch 17/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.0629 - accuracy: 0.6362 - val_loss: 1.5932 - val_accuracy: 0.4970
Epoch 18/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.0330 - accuracy: 0.6458 - val_loss: 1.5968 - val_accuracy: 0.5080
Epoch 19/100
45000/45000 [==============================] - 9s 195us/sample - loss: 1.0104 - accuracy: 0.6488 - val_loss: 1.6166 - val_accuracy: 0.5152
Epoch 20/100
45000/45000 [==============================] - 9s 206us/sample - loss: 0.9896 - accuracy: 0.6629 - val_loss: 1.6174 - val_accuracy: 0.5154
Epoch 21/100
45000/45000 [==============================] - 9s 211us/sample - loss: 0.9741 - accuracy: 0.6650 - val_loss: 1.7201 - val_accuracy: 0.5040
Epoch 22/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9475 - accuracy: 0.6769 - val_loss: 1.7498 - val_accuracy: 0.5176
Epoch 23/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.9346 - accuracy: 0.6780 - val_loss: 1.7491 - val_accuracy: 0.5020
Epoch 24/100
45000/45000 [==============================] - 10s 223us/sample - loss: 1.1878 - accuracy: 0.6792 - val_loss: 1.6664 - val_accuracy: 0.4906
Epoch 25/100
45000/45000 [==============================] - 10s 219us/sample - loss: 0.9851 - accuracy: 0.6646 - val_loss: 1.7358 - val_accuracy: 0.5086
Epoch 26/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9053 - accuracy: 0.6911 - val_loss: 1.8361 - val_accuracy: 0.5094
Epoch 27/100
45000/45000 [==============================] - 10s 215us/sample - loss: 0.8681 - accuracy: 0.7048 - val_loss: 1.8487 - val_accuracy: 0.5036
Epoch 28/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.8460 - accuracy: 0.7132 - val_loss: 1.8516 - val_accuracy: 0.5068
Epoch 29/100
45000/45000 [==============================] - 10s 223us/sample - loss: 0.8258 - accuracy: 0.7208 - val_loss: 1.9383 - val_accuracy: 0.5094
Epoch 30/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.8106 - accuracy: 0.7248 - val_loss: 2.0527 - val_accuracy: 0.4974
5000/5000 [==============================] - 0s 71us/sample - loss: 1.4974 - accuracy: 0.5082
###Markdown
The model reaches 50.8% accuracy on the validation set. That's very slightly worse than without dropout (51.4%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple of utility functions. The first will run the model many times (10 by default) and return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get virtually no accuracy improvement in this case (from 50.8% to 50.9%). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(len(X_train_scaled) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/15
45000/45000 [==============================] - 3s 69us/sample - loss: 2.0504 - accuracy: 0.2823 - val_loss: 1.7711 - val_accuracy: 0.3706
Epoch 2/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.7626 - accuracy: 0.3766 - val_loss: 1.7751 - val_accuracy: 0.3844
Epoch 3/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.6264 - accuracy: 0.4272 - val_loss: 1.6774 - val_accuracy: 0.4216
Epoch 4/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.5527 - accuracy: 0.4474 - val_loss: 1.6633 - val_accuracy: 0.4316
Epoch 5/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.4997 - accuracy: 0.4701 - val_loss: 1.5909 - val_accuracy: 0.4540
Epoch 6/15
45000/45000 [==============================] - 3s 60us/sample - loss: 1.4564 - accuracy: 0.4841 - val_loss: 1.5982 - val_accuracy: 0.4624
Epoch 7/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.4232 - accuracy: 0.4958 - val_loss: 1.6417 - val_accuracy: 0.4382
Epoch 8/15
45000/45000 [==============================] - 3s 58us/sample - loss: 1.3530 - accuracy: 0.5199 - val_loss: 1.5050 - val_accuracy: 0.4778
Epoch 9/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.2771 - accuracy: 0.5480 - val_loss: 1.5254 - val_accuracy: 0.4928
Epoch 10/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.2073 - accuracy: 0.5726 - val_loss: 1.5013 - val_accuracy: 0.5052
Epoch 11/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.1380 - accuracy: 0.5948 - val_loss: 1.4941 - val_accuracy: 0.5170
Epoch 12/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.0672 - accuracy: 0.6204 - val_loss: 1.5091 - val_accuracy: 0.5106
Epoch 13/15
45000/45000 [==============================] - 3s 56us/sample - loss: 0.9967 - accuracy: 0.6466 - val_loss: 1.5261 - val_accuracy: 0.5212
Epoch 14/15
45000/45000 [==============================] - 3s 58us/sample - loss: 0.9301 - accuracy: 0.6712 - val_loss: 1.5437 - val_accuracy: 0.5264
Epoch 15/15
45000/45000 [==============================] - 3s 59us/sample - loss: 0.8893 - accuracy: 0.6866 - val_loss: 1.5650 - val_accuracy: 0.5276
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions LeakyReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using LeakyReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.2819 - accuracy: 0.6229 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.7955 - accuracy: 0.7362 - val_loss: 0.7130 - val_accuracy: 0.7656
Epoch 3/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.6816 - accuracy: 0.7721 - val_loss: 0.6427 - val_accuracy: 0.7898
Epoch 4/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.6217 - accuracy: 0.7944 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5832 - accuracy: 0.8075 - val_loss: 0.5582 - val_accuracy: 0.8202
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5553 - accuracy: 0.8157 - val_loss: 0.5350 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5338 - accuracy: 0.8224 - val_loss: 0.5157 - val_accuracy: 0.8304
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5172 - accuracy: 0.8273 - val_loss: 0.5079 - val_accuracy: 0.8286
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5040 - accuracy: 0.8289 - val_loss: 0.4895 - val_accuracy: 0.8390
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4924 - accuracy: 0.8321 - val_loss: 0.4817 - val_accuracy: 0.8394
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 2ms/step - loss: 1.3461 - accuracy: 0.6209 - val_loss: 0.9255 - val_accuracy: 0.7186
Epoch 2/10
1719/1719 [==============================] - 3s 1ms/step - loss: 0.8197 - accuracy: 0.7355 - val_loss: 0.7305 - val_accuracy: 0.7630
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.6966 - accuracy: 0.7694 - val_loss: 0.6565 - val_accuracy: 0.7880
Epoch 4/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.6331 - accuracy: 0.7909 - val_loss: 0.6003 - val_accuracy: 0.8048
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5917 - accuracy: 0.8057 - val_loss: 0.5656 - val_accuracy: 0.8178
Epoch 6/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5618 - accuracy: 0.8135 - val_loss: 0.5406 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5390 - accuracy: 0.8205 - val_loss: 0.5196 - val_accuracy: 0.8314
Epoch 8/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5213 - accuracy: 0.8257 - val_loss: 0.5113 - val_accuracy: 0.8314
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5070 - accuracy: 0.8289 - val_loss: 0.4916 - val_accuracy: 0.8378
Epoch 10/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.4945 - accuracy: 0.8315 - val_loss: 0.4826 - val_accuracy: 0.8396
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is easy: just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU The SELU activation function was introduced in a 2017 [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr. During training, a neural network composed exclusively of a stack of dense layers, using the SELU activation function and LeCun initialization, will self-normalize: the output of each layer tends to preserve its mean and standard deviation, which solves the vanishing/exploding gradients problem. As a result, the SELU activation function often outperforms other activation functions for this kind of network (especially very deep ones), so it is definitely worth trying. However, the self-normalizing property of SELU is easily broken: you cannot use ℓ1 or ℓ2 regularization, dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks will not self-normalize). In practice, though, it works well with sequential CNNs. If self-normalization is broken, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned so that the mean output of each neuron stays close to 0 and the standard deviation stays close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 as well). Using this activation function, even a 1,000-layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
    W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 12s 7ms/step - loss: 1.7729 - accuracy: 0.3071 - val_loss: 1.3899 - val_accuracy: 0.4524
Epoch 2/5
1719/1719 [==============================] - 12s 7ms/step - loss: 1.0602 - accuracy: 0.5860 - val_loss: 0.7820 - val_accuracy: 0.7254
Epoch 3/5
1719/1719 [==============================] - 12s 7ms/step - loss: 0.8437 - accuracy: 0.6949 - val_loss: 0.7487 - val_accuracy: 0.7022
Epoch 4/5
1719/1719 [==============================] - 13s 7ms/step - loss: 0.7273 - accuracy: 0.7344 - val_loss: 0.8475 - val_accuracy: 0.6896
Epoch 5/5
1719/1719 [==============================] - 12s 7ms/step - loss: 0.7665 - accuracy: 0.7226 - val_loss: 0.6661 - val_accuracy: 0.7730
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 13s 7ms/step - loss: 1.8148 - accuracy: 0.2571 - val_loss: 1.3302 - val_accuracy: 0.4382
Epoch 2/5
1719/1719 [==============================] - 12s 7ms/step - loss: 1.1549 - accuracy: 0.5033 - val_loss: 0.8720 - val_accuracy: 0.6658
Epoch 3/5
1719/1719 [==============================] - 12s 7ms/step - loss: 0.9585 - accuracy: 0.6099 - val_loss: 1.0348 - val_accuracy: 0.5828
Epoch 4/5
1719/1719 [==============================] - 13s 7ms/step - loss: 0.8446 - accuracy: 0.6597 - val_loss: 0.8025 - val_accuracy: 0.6584
Epoch 5/5
1719/1719 [==============================] - 12s 7ms/step - loss: 0.8232 - accuracy: 0.6630 - val_loss: 0.7265 - val_accuracy: 0.7096
###Markdown
Not great at all; we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.8750 - accuracy: 0.7124 - val_loss: 0.5525 - val_accuracy: 0.8228
Epoch 2/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.5753 - accuracy: 0.8030 - val_loss: 0.4724 - val_accuracy: 0.8470
Epoch 3/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.5189 - accuracy: 0.8203 - val_loss: 0.4375 - val_accuracy: 0.8550
Epoch 4/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4827 - accuracy: 0.8323 - val_loss: 0.4151 - val_accuracy: 0.8606
Epoch 5/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4565 - accuracy: 0.8407 - val_loss: 0.3997 - val_accuracy: 0.8638
Epoch 6/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4398 - accuracy: 0.8472 - val_loss: 0.3867 - val_accuracy: 0.8698
Epoch 7/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4241 - accuracy: 0.8514 - val_loss: 0.3763 - val_accuracy: 0.8704
Epoch 8/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4143 - accuracy: 0.8539 - val_loss: 0.3713 - val_accuracy: 0.8740
Epoch 9/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4024 - accuracy: 0.8580 - val_loss: 0.3631 - val_accuracy: 0.8748
Epoch 10/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.3914 - accuracy: 0.8624 - val_loss: 0.3572 - val_accuracy: 0.8754
###Markdown
Sometimes applying BN before the activation function works better (this is debated). Moreover, the layer before a `BatchNormalization` layer does not need bias terms, since the `BatchNormalization` layer cancels them out anyway; you can save a few unnecessary parameters by creating those layers with `use_bias=False`:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 4s 2ms/step - loss: 1.0317 - accuracy: 0.6757 - val_loss: 0.6767 - val_accuracy: 0.7812
Epoch 2/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.6790 - accuracy: 0.7793 - val_loss: 0.5566 - val_accuracy: 0.8182
Epoch 3/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.5960 - accuracy: 0.8037 - val_loss: 0.5007 - val_accuracy: 0.8362
Epoch 4/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.5447 - accuracy: 0.8191 - val_loss: 0.4666 - val_accuracy: 0.8450
Epoch 5/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.5109 - accuracy: 0.8280 - val_loss: 0.4433 - val_accuracy: 0.8536
Epoch 6/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4898 - accuracy: 0.8337 - val_loss: 0.4262 - val_accuracy: 0.8548
Epoch 7/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4712 - accuracy: 0.8396 - val_loss: 0.4130 - val_accuracy: 0.8568
Epoch 8/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4560 - accuracy: 0.8441 - val_loss: 0.4034 - val_accuracy: 0.8608
Epoch 9/10
1719/1719 [==============================] - 3s 2ms/step - loss: 0.4441 - accuracy: 0.8474 - val_loss: 0.3943 - val_accuracy: 0.8638
Epoch 10/10
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4333 - accuracy: 0.8504 - val_loss: 0.3874 - val_accuracy: 0.8660
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
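# Illustrative sketch of the difference between the two options, using a made-up
# gradient vector [0.9, 100.0] (not part of the original notebook):
# - value clipping clips each component independently, which can change the
#   gradient's direction: [0.9, 100.0] -> [0.9, 1.0]
# - norm clipping rescales the whole vector to the given L2 norm, preserving
#   its direction: [0.9, 100.0] -> approximately [0.009, 0.99996]
grad = tf.constant([0.9, 100.0])
clipped_by_value = tf.clip_by_value(grad, -1.0, 1.0)
clipped_by_norm = tf.clip_by_norm(grad, 1.0)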
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the Fashion MNIST training set in two:* `X_train_A`: all images except sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set containing only the first 200 sandal and shirt images.The validation set and the test set are split the same way, but without restricting the number of images.We will train a model on set A (an 8-class classification task) and try to reuse it to tackle set B (a binary classification task). We hope to transfer a little bit of knowledge from task A to task B, since the classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to the classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers transfer much more, since each learned pattern can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 0s 29ms/step - loss: 0.5761 - accuracy: 0.6500 - val_loss: 0.5810 - val_accuracy: 0.6349
Epoch 2/4
7/7 [==============================] - 0s 14ms/step - loss: 0.5400 - accuracy: 0.6800 - val_loss: 0.5438 - val_accuracy: 0.6795
Epoch 3/4
7/7 [==============================] - 0s 22ms/step - loss: 0.5035 - accuracy: 0.7300 - val_loss: 0.5120 - val_accuracy: 0.7099
Epoch 4/4
7/7 [==============================] - 0s 21ms/step - loss: 0.4722 - accuracy: 0.7500 - val_loss: 0.4835 - val_accuracy: 0.7302
Epoch 1/16
7/7 [==============================] - 0s 22ms/step - loss: 0.3941 - accuracy: 0.8200 - val_loss: 0.3446 - val_accuracy: 0.8661
Epoch 2/16
7/7 [==============================] - 0s 12ms/step - loss: 0.2785 - accuracy: 0.9350 - val_loss: 0.2595 - val_accuracy: 0.9270
Epoch 3/16
7/7 [==============================] - 0s 15ms/step - loss: 0.2076 - accuracy: 0.9650 - val_loss: 0.2104 - val_accuracy: 0.9554
Epoch 4/16
7/7 [==============================] - 0s 14ms/step - loss: 0.1665 - accuracy: 0.9800 - val_loss: 0.1787 - val_accuracy: 0.9696
Epoch 5/16
7/7 [==============================] - 0s 23ms/step - loss: 0.1394 - accuracy: 0.9800 - val_loss: 0.1558 - val_accuracy: 0.9767
Epoch 6/16
7/7 [==============================] - 0s 17ms/step - loss: 0.1195 - accuracy: 0.9950 - val_loss: 0.1391 - val_accuracy: 0.9797
Epoch 7/16
7/7 [==============================] - 0s 18ms/step - loss: 0.1049 - accuracy: 0.9950 - val_loss: 0.1265 - val_accuracy: 0.9848
Epoch 8/16
7/7 [==============================] - 0s 15ms/step - loss: 0.0937 - accuracy: 1.0000 - val_loss: 0.1163 - val_accuracy: 0.9858
Epoch 9/16
7/7 [==============================] - 0s 18ms/step - loss: 0.0847 - accuracy: 1.0000 - val_loss: 0.1065 - val_accuracy: 0.9888
Epoch 10/16
7/7 [==============================] - 0s 19ms/step - loss: 0.0762 - accuracy: 1.0000 - val_loss: 0.1000 - val_accuracy: 0.9899
Epoch 11/16
7/7 [==============================] - 0s 18ms/step - loss: 0.0704 - accuracy: 1.0000 - val_loss: 0.0940 - val_accuracy: 0.9899
Epoch 12/16
7/7 [==============================] - 0s 20ms/step - loss: 0.0649 - accuracy: 1.0000 - val_loss: 0.0888 - val_accuracy: 0.9899
Epoch 13/16
7/7 [==============================] - 0s 22ms/step - loss: 0.0603 - accuracy: 1.0000 - val_loss: 0.0839 - val_accuracy: 0.9899
Epoch 14/16
7/7 [==============================] - 0s 27ms/step - loss: 0.0559 - accuracy: 1.0000 - val_loss: 0.0803 - val_accuracy: 0.9899
Epoch 15/16
7/7 [==============================] - 0s 19ms/step - loss: 0.0526 - accuracy: 1.0000 - val_loss: 0.0769 - val_accuracy: 0.9899
Epoch 16/16
7/7 [==============================] - 0s 21ms/step - loss: 0.0496 - accuracy: 1.0000 - val_loss: 0.0739 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 2ms/step - loss: 0.0681 - accuracy: 0.9935
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4!
###Code
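# Ratio of the two error rates: (error rate of model_B) / (error rate of model_B_on_A).
# The accuracy figures 96.95% and 99.25% come from a run of the evaluations above,
# so this is roughly 3.05 / 0.75, i.e. about a factor of 4.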
(100 - 96.95) / (100 - 99.25)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
        # Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
성능 기반 스케줄링
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4894 - accuracy: 0.8274 - val_loss: 0.4092 - val_accuracy: 0.8604
Epoch 2/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3820 - accuracy: 0.8652 - val_loss: 0.3739 - val_accuracy: 0.8688
Epoch 3/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3487 - accuracy: 0.8766 - val_loss: 0.3735 - val_accuracy: 0.8682
Epoch 4/25
1719/1719 [==============================] - 3s 2ms/step - loss: 0.3263 - accuracy: 0.8839 - val_loss: 0.3494 - val_accuracy: 0.8798
Epoch 5/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3104 - accuracy: 0.8898 - val_loss: 0.3430 - val_accuracy: 0.8794
Epoch 6/25
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2958 - accuracy: 0.8950 - val_loss: 0.3415 - val_accuracy: 0.8802
Epoch 7/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2854 - accuracy: 0.8988 - val_loss: 0.3356 - val_accuracy: 0.8816
Epoch 8/25
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2760 - accuracy: 0.9016 - val_loss: 0.3364 - val_accuracy: 0.8818
Epoch 9/25
1719/1719 [==============================] - 3s 1ms/step - loss: 0.2677 - accuracy: 0.9052 - val_loss: 0.3265 - val_accuracy: 0.8852
Epoch 10/25
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2608 - accuracy: 0.9069 - val_loss: 0.3240 - val_accuracy: 0.8852
Epoch 11/25
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2551 - accuracy: 0.9086 - val_loss: 0.3252 - val_accuracy: 0.8868
Epoch 12/25
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2496 - accuracy: 0.9126 - val_loss: 0.3303 - val_accuracy: 0.8820
Epoch 13/25
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2449 - accuracy: 0.9137 - val_loss: 0.3219 - val_accuracy: 0.8868
Epoch 14/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2415 - accuracy: 0.9146 - val_loss: 0.3222 - val_accuracy: 0.8862
Epoch 15/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2375 - accuracy: 0.9166 - val_loss: 0.3209 - val_accuracy: 0.8876
Epoch 16/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2343 - accuracy: 0.9179 - val_loss: 0.3183 - val_accuracy: 0.8890
Epoch 17/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2316 - accuracy: 0.9185 - val_loss: 0.3195 - val_accuracy: 0.8890
Epoch 18/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2290 - accuracy: 0.9196 - val_loss: 0.3167 - val_accuracy: 0.8910
Epoch 19/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2269 - accuracy: 0.9206 - val_loss: 0.3195 - val_accuracy: 0.8890
Epoch 20/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2250 - accuracy: 0.9222 - val_loss: 0.3167 - val_accuracy: 0.8896
Epoch 21/25
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2228 - accuracy: 0.9223 - val_loss: 0.3178 - val_accuracy: 0.8902
Epoch 22/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2215 - accuracy: 0.9224 - val_loss: 0.3161 - val_accuracy: 0.8916
Epoch 23/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2200 - accuracy: 0.9230 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 24/25
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2187 - accuracy: 0.9242 - val_loss: 0.3164 - val_accuracy: 0.8902
Epoch 25/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2179 - accuracy: 0.9243 - val_loss: 0.3163 - val_accuracy: 0.8912
###Markdown
For piecewise constant scheduling, use this instead:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
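# A minimal usage sketch (an assumption, mirroring the ExponentialDecay example above):
# the schedule object can be passed directly to an optimizer as its learning rate.
optimizer = keras.optimizers.SGD(learning_rate)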
###Output
_____no_output_____
###Markdown
1Cycle Scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
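# The helper below performs an exponential learning-rate range test: it multiplies
# the learning rate by a constant factor after every batch (via the callback above),
# records the loss at each rate, then restores the model's initial weights and
# learning rate so that actual training can start fresh.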
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
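# 1cycle policy: ramp the learning rate linearly from start_rate up to max_rate
# during the first half of training, back down to start_rate during the second
# half, then drop it further towards last_rate over the final iterations.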
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 1s 2ms/step - loss: 0.6572 - accuracy: 0.7740 - val_loss: 0.4872 - val_accuracy: 0.8338
Epoch 2/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4580 - accuracy: 0.8397 - val_loss: 0.4274 - val_accuracy: 0.8522
Epoch 3/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4121 - accuracy: 0.8545 - val_loss: 0.4114 - val_accuracy: 0.8590
Epoch 4/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3837 - accuracy: 0.8642 - val_loss: 0.3868 - val_accuracy: 0.8686
Epoch 5/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3639 - accuracy: 0.8719 - val_loss: 0.3766 - val_accuracy: 0.8682
Epoch 6/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3456 - accuracy: 0.8776 - val_loss: 0.3744 - val_accuracy: 0.8712
Epoch 7/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3330 - accuracy: 0.8811 - val_loss: 0.3631 - val_accuracy: 0.8710
Epoch 8/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3184 - accuracy: 0.8863 - val_loss: 0.3953 - val_accuracy: 0.8616
Epoch 9/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3064 - accuracy: 0.8889 - val_loss: 0.3490 - val_accuracy: 0.8766
Epoch 10/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2944 - accuracy: 0.8924 - val_loss: 0.3397 - val_accuracy: 0.8804
Epoch 11/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2838 - accuracy: 0.8962 - val_loss: 0.3459 - val_accuracy: 0.8806
Epoch 12/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2707 - accuracy: 0.9022 - val_loss: 0.3639 - val_accuracy: 0.8702
Epoch 13/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2536 - accuracy: 0.9082 - val_loss: 0.3358 - val_accuracy: 0.8836
Epoch 14/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2403 - accuracy: 0.9140 - val_loss: 0.3466 - val_accuracy: 0.8812
Epoch 15/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2278 - accuracy: 0.9182 - val_loss: 0.3256 - val_accuracy: 0.8856
Epoch 16/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2158 - accuracy: 0.9233 - val_loss: 0.3301 - val_accuracy: 0.8834
Epoch 17/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2061 - accuracy: 0.9263 - val_loss: 0.3349 - val_accuracy: 0.8868
Epoch 18/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1977 - accuracy: 0.9302 - val_loss: 0.3241 - val_accuracy: 0.8898
Epoch 19/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1891 - accuracy: 0.9340 - val_loss: 0.3234 - val_accuracy: 0.8904
Epoch 20/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1820 - accuracy: 0.9369 - val_loss: 0.3223 - val_accuracy: 0.8924
Epoch 21/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1751 - accuracy: 0.9400 - val_loss: 0.3219 - val_accuracy: 0.8912
Epoch 22/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1699 - accuracy: 0.9422 - val_loss: 0.3180 - val_accuracy: 0.8938
Epoch 23/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1654 - accuracy: 0.9436 - val_loss: 0.3185 - val_accuracy: 0.8942
Epoch 24/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1626 - accuracy: 0.9453 - val_loss: 0.3175 - val_accuracy: 0.8940
Epoch 25/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1609 - accuracy: 0.9463 - val_loss: 0.3168 - val_accuracy: 0.8940
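###Markdown
As a sanity check, the learning-rate trajectory produced by `OneCycleScheduler` can be replayed outside of training (a sketch that reuses only the scheduler's own `_interpolate()` logic; no model is involved, and the 1,000 iterations are an arbitrary illustrative choice):
###Code
# Sketch: reproduce the rate computed at each iteration to visualize the
# triangular 1cycle shape followed by the final low-rate tail.
sched = OneCycleScheduler(iterations=1000, max_rate=0.05)
rates_1cycle = []
for i in range(sched.iterations):
    sched.iteration = i
    if i < sched.half_iteration:
        rate = sched._interpolate(0, sched.half_iteration, sched.start_rate, sched.max_rate)
    elif i < 2 * sched.half_iteration:
        rate = sched._interpolate(sched.half_iteration, 2 * sched.half_iteration,
                                  sched.max_rate, sched.start_rate)
    else:
        rate = sched._interpolate(2 * sched.half_iteration, sched.iterations,
                                  sched.start_rate, sched.last_rate)
    rates_1cycle.append(max(rate, sched.last_rate))
plt.plot(rates_1cycle)
plt.xlabel("Iteration")
plt.ylabel("Learning rate")
plt.show()
###Output
_____no_output_____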
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 3s 2ms/step - loss: 1.6313 - accuracy: 0.8113 - val_loss: 0.7218 - val_accuracy: 0.8310
Epoch 2/2
1719/1719 [==============================] - 3s 2ms/step - loss: 0.7187 - accuracy: 0.8273 - val_loss: 0.6826 - val_accuracy: 0.8382
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5838 - accuracy: 0.7998 - val_loss: 0.3730 - val_accuracy: 0.8642
Epoch 2/2
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4209 - accuracy: 0.8442 - val_loss: 0.3414 - val_accuracy: 0.8726
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
1719/1719 [==============================] - 3s 2ms/step - loss: 0.4161 - accuracy: 0.8465
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
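# Forcing training=True in call() keeps dropout active at inference time, which is
# all that is needed to turn a regular Dropout or AlphaDropout layer into an
# MC Dropout layer: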
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use MC Dropout with the model:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 4s 2ms/step - loss: 0.4752 - accuracy: 0.8330 - val_loss: 0.3853 - val_accuracy: 0.8628
Epoch 2/2
1719/1719 [==============================] - 4s 2ms/step - loss: 0.3532 - accuracy: 0.8727 - val_loss: 0.3725 - val_accuracy: 0.8692
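###Markdown
Note that `max_norm()` constrains the incoming weight vector of each unit along `axis=0` by default, which is appropriate for `Dense` layers. For convolutional kernels the constraint is usually applied over the spatial and input-channel axes instead. A minimal sketch (the `Conv2D` layer below is purely illustrative and unrelated to the Fashion MNIST model above):
###Code
# Sketch: constrain each filter's full kernel (height, width, input channels)
# so that every filter's weights have a norm of at most 1.
conv_layer = keras.layers.Conv2D(32, kernel_size=3, activation="relu", padding="same",
                                 kernel_constraint=keras.constraints.max_norm(1., axis=[0, 1, 2]))
###Output
_____no_output_____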
###Markdown
Exercise Solutions 1. to 7. See Appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32×32-pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and compared their learning curves for 10 epochs each (using the TensorBoard callback below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better. (A sketch of how such a sweep could be scripted appears after the data-loading cell below.)
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. Since we are using early stopping, we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
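###Markdown
The learning-rate comparison mentioned a couple of cells above could be scripted along these lines (a hedged sketch: the `build_cifar10_dnn()` helper and the log-directory names are illustrative, not the code that actually produced the curves discussed here):
###Code
# Sketch: train a fresh copy of the 20-layer ELU network for a few epochs per
# candidate learning rate, logging each run to its own TensorBoard directory
# so the learning curves can be compared side by side.
def build_cifar10_dnn():  # hypothetical helper mirroring the architecture above
    m = keras.models.Sequential()
    m.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
    for _ in range(20):
        m.add(keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"))
    m.add(keras.layers.Dense(10, activation="softmax"))
    return m

for lr in (1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3, 1e-2):
    m = build_cifar10_dnn()
    m.compile(loss="sparse_categorical_crossentropy",
              optimizer=keras.optimizers.Nadam(lr=lr),
              metrics=["accuracy"])
    logdir = os.path.join(os.curdir, "my_cifar10_logs", "lr_{:g}".format(lr))
    m.fit(X_train, y_train, epochs=10,
          validation_data=(X_valid, y_valid),
          callbacks=[keras.callbacks.TensorBoard(logdir)])
###Output
_____no_output_____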
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 2ms/step - loss: 1.5307 - accuracy: 0.4664
###Markdown
The model with the lowest validation loss gets about 47% accuracy on the validation set. It took 39 epochs to reach that validation score, at roughly 10 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes: * I added a BN layer after every Dense layer (before the activation function), and also before the first hidden layer; only the output layer has no BN. * I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs. * I renamed run_logdir to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Epoch 1/100
2/1407 [..............................] - ETA: 5:11 - loss: 2.8693 - accuracy: 0.1094WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.217539). Check your callbacks.
1407/1407 [==============================] - 25s 18ms/step - loss: 1.8415 - accuracy: 0.3395 - val_loss: 1.6539 - val_accuracy: 0.4120
Epoch 2/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.6691 - accuracy: 0.4041 - val_loss: 1.6265 - val_accuracy: 0.4260
Epoch 3/100
1407/1407 [==============================] - 25s 18ms/step - loss: 1.5970 - accuracy: 0.4307 - val_loss: 1.5901 - val_accuracy: 0.4284
Epoch 4/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.5495 - accuracy: 0.4476 - val_loss: 1.4981 - val_accuracy: 0.4676
Epoch 5/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.5037 - accuracy: 0.4658 - val_loss: 1.4604 - val_accuracy: 0.4792
Epoch 6/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.4725 - accuracy: 0.4775 - val_loss: 1.4212 - val_accuracy: 0.4952
Epoch 7/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.4373 - accuracy: 0.4895 - val_loss: 1.4383 - val_accuracy: 0.4858
Epoch 8/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.4100 - accuracy: 0.5025 - val_loss: 1.4018 - val_accuracy: 0.5016
Epoch 9/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.3902 - accuracy: 0.5082 - val_loss: 1.3647 - val_accuracy: 0.5104
Epoch 10/100
1407/1407 [==============================] - 25s 18ms/step - loss: 1.3678 - accuracy: 0.5145 - val_loss: 1.3351 - val_accuracy: 0.5318
Epoch 11/100
1407/1407 [==============================] - 25s 17ms/step - loss: 1.3460 - accuracy: 0.5217 - val_loss: 1.3598 - val_accuracy: 0.5198
Epoch 12/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.3193 - accuracy: 0.5326 - val_loss: 1.3929 - val_accuracy: 0.5014
Epoch 13/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.3034 - accuracy: 0.5371 - val_loss: 1.3755 - val_accuracy: 0.5132
Epoch 14/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.2847 - accuracy: 0.5439 - val_loss: 1.3422 - val_accuracy: 0.5332
Epoch 15/100
1407/1407 [==============================] - 25s 18ms/step - loss: 1.2654 - accuracy: 0.5520 - val_loss: 1.3581 - val_accuracy: 0.5182
Epoch 16/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.2552 - accuracy: 0.5564 - val_loss: 1.3784 - val_accuracy: 0.5198
Epoch 17/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.2381 - accuracy: 0.5610 - val_loss: 1.3106 - val_accuracy: 0.5402
Epoch 18/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.2205 - accuracy: 0.5688 - val_loss: 1.3164 - val_accuracy: 0.5388
Epoch 19/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.2072 - accuracy: 0.5723 - val_loss: 1.3458 - val_accuracy: 0.5242
Epoch 20/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.1982 - accuracy: 0.5765 - val_loss: 1.3550 - val_accuracy: 0.5186
Epoch 21/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.1813 - accuracy: 0.5822 - val_loss: 1.3579 - val_accuracy: 0.5298
Epoch 22/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.1640 - accuracy: 0.5911 - val_loss: 1.3394 - val_accuracy: 0.5308
Epoch 23/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.1536 - accuracy: 0.5946 - val_loss: 1.3336 - val_accuracy: 0.5378
Epoch 24/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.1423 - accuracy: 0.5996 - val_loss: 1.3270 - val_accuracy: 0.5400
Epoch 25/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.1320 - accuracy: 0.6010 - val_loss: 1.3508 - val_accuracy: 0.5376
Epoch 26/100
1407/1407 [==============================] - 23s 17ms/step - loss: 1.1162 - accuracy: 0.6062 - val_loss: 1.3527 - val_accuracy: 0.5228
Epoch 27/100
1407/1407 [==============================] - 23s 17ms/step - loss: 1.1052 - accuracy: 0.6106 - val_loss: 1.3388 - val_accuracy: 0.5456
Epoch 28/100
1407/1407 [==============================] - 23s 17ms/step - loss: 1.1021 - accuracy: 0.6120 - val_loss: 1.3551 - val_accuracy: 0.5320
Epoch 29/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.0863 - accuracy: 0.6201 - val_loss: 1.3624 - val_accuracy: 0.5260
Epoch 30/100
1407/1407 [==============================] - 25s 18ms/step - loss: 1.0755 - accuracy: 0.6208 - val_loss: 1.3561 - val_accuracy: 0.5340
Epoch 31/100
1407/1407 [==============================] - 25s 18ms/step - loss: 1.0631 - accuracy: 0.6265 - val_loss: 1.3543 - val_accuracy: 0.5332
Epoch 32/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.0535 - accuracy: 0.6294 - val_loss: 1.3839 - val_accuracy: 0.5282
Epoch 33/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.0423 - accuracy: 0.6297 - val_loss: 1.3486 - val_accuracy: 0.5474
Epoch 34/100
1407/1407 [==============================] - 25s 17ms/step - loss: 1.0321 - accuracy: 0.6349 - val_loss: 1.3439 - val_accuracy: 0.5530
Epoch 35/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.0219 - accuracy: 0.6401 - val_loss: 1.3573 - val_accuracy: 0.5410
Epoch 36/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.0190 - accuracy: 0.6398 - val_loss: 1.3800 - val_accuracy: 0.5362
Epoch 37/100
1407/1407 [==============================] - 24s 17ms/step - loss: 1.0024 - accuracy: 0.6459 - val_loss: 1.3789 - val_accuracy: 0.5358
157/157 [==============================] - 0s 3ms/step - loss: 1.3106 - accuracy: 0.5402
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 39 epochs to reach the lowest validation loss, while the new model with BN took 18 epochs, more than twice as fast. The BN layers stabilized training and allowed a much larger learning rate, so convergence was faster. * *Does BN produce a better model?* Yes! The final model is also much better, with 55% accuracy instead of 47%. It's still not a very good model, but at least it's much better than before (a convolutional neural network would do much better, but that's a different topic; see chapter 14). * *How does BN affect training speed?* Although the model converged twice as fast, each epoch took about 16s instead of 10s, because of the extra computations required by the BN layers. So overall, the number of epochs was reduced by 50%, but the training time (wall-clock time) was shortened by about 30%. Still a significant improvement! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
157/157 [==============================] - 0s 2ms/step - loss: 1.4754 - accuracy: 0.5036
###Markdown
We get 51.4% accuracy, which is better than the original model, but not quite as good as the model using batch normalization. It took 13 epochs to reach the best model, which is faster than both the original model and the BN model, and each epoch took only about 10 seconds, just like the original model. So this is the fastest model to train so far, both in terms of epochs and wall-clock time. e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 10s 7ms/step - loss: 1.8937 - accuracy: 0.3228 - val_loss: 1.7207 - val_accuracy: 0.3868
Epoch 2/100
1407/1407 [==============================] - 10s 7ms/step - loss: 1.6668 - accuracy: 0.4112 - val_loss: 1.8192 - val_accuracy: 0.3636
Epoch 3/100
1407/1407 [==============================] - 10s 7ms/step - loss: 1.5786 - accuracy: 0.4474 - val_loss: 1.6313 - val_accuracy: 0.4296
Epoch 4/100
1407/1407 [==============================] - 10s 7ms/step - loss: 1.5094 - accuracy: 0.4694 - val_loss: 1.5781 - val_accuracy: 0.4556
Epoch 5/100
1407/1407 [==============================] - 10s 7ms/step - loss: 1.4543 - accuracy: 0.4921 - val_loss: 1.5680 - val_accuracy: 0.4566
Epoch 6/100
1407/1407 [==============================] - 10s 7ms/step - loss: 1.4049 - accuracy: 0.5085 - val_loss: 1.5289 - val_accuracy: 0.4720
Epoch 7/100
1407/1407 [==============================] - 10s 7ms/step - loss: 1.3581 - accuracy: 0.5273 - val_loss: 1.6202 - val_accuracy: 0.4554
Epoch 8/100
1407/1407 [==============================] - 10s 7ms/step - loss: 1.3217 - accuracy: 0.5410 - val_loss: 1.5328 - val_accuracy: 0.4796
Epoch 9/100
1407/1407 [==============================] - 10s 7ms/step - loss: 1.2852 - accuracy: 0.5565 - val_loss: 1.5200 - val_accuracy: 0.4816
Epoch 10/100
1407/1407 [==============================] - 10s 7ms/step - loss: 1.2537 - accuracy: 0.5644 - val_loss: 1.5725 - val_accuracy: 0.4908
Epoch 11/100
1407/1407 [==============================] - 10s 7ms/step - loss: 1.2174 - accuracy: 0.5785 - val_loss: 1.5849 - val_accuracy: 0.4938
Epoch 12/100
1407/1407 [==============================] - 10s 7ms/step - loss: 1.1866 - accuracy: 0.5924 - val_loss: 1.5408 - val_accuracy: 0.4946
Epoch 13/100
1407/1407 [==============================] - 9s 7ms/step - loss: 1.1584 - accuracy: 0.5984 - val_loss: 1.5857 - val_accuracy: 0.5004
Epoch 14/100
1407/1407 [==============================] - 10s 7ms/step - loss: 1.1339 - accuracy: 0.6118 - val_loss: 1.5923 - val_accuracy: 0.5056
Epoch 15/100
1407/1407 [==============================] - 10s 7ms/step - loss: 1.1041 - accuracy: 0.6198 - val_loss: 1.6159 - val_accuracy: 0.5058
Epoch 16/100
1407/1407 [==============================] - 10s 7ms/step - loss: 1.0774 - accuracy: 0.6297 - val_loss: 1.6401 - val_accuracy: 0.5010
Epoch 17/100
1407/1407 [==============================] - 10s 7ms/step - loss: 1.0570 - accuracy: 0.6362 - val_loss: 1.6805 - val_accuracy: 0.5112
Epoch 18/100
1407/1407 [==============================] - 10s 7ms/step - loss: 1.0391 - accuracy: 0.6434 - val_loss: 1.6258 - val_accuracy: 0.5144
Epoch 19/100
1407/1407 [==============================] - 9s 7ms/step - loss: 1.0158 - accuracy: 0.6525 - val_loss: 1.7472 - val_accuracy: 0.5030
Epoch 20/100
1407/1407 [==============================] - 10s 7ms/step - loss: 0.9988 - accuracy: 0.6595 - val_loss: 1.6717 - val_accuracy: 0.5076
Epoch 21/100
1407/1407 [==============================] - 10s 7ms/step - loss: 0.9761 - accuracy: 0.6634 - val_loss: 1.7323 - val_accuracy: 0.5014
Epoch 22/100
1407/1407 [==============================] - 10s 7ms/step - loss: 0.9518 - accuracy: 0.6729 - val_loss: 1.7079 - val_accuracy: 0.5040
Epoch 23/100
1407/1407 [==============================] - 10s 7ms/step - loss: 0.9377 - accuracy: 0.6813 - val_loss: 1.7129 - val_accuracy: 0.5012
Epoch 24/100
1407/1407 [==============================] - 10s 7ms/step - loss: 1.1935 - accuracy: 0.6054 - val_loss: 1.6416 - val_accuracy: 0.4612
Epoch 25/100
1407/1407 [==============================] - 10s 7ms/step - loss: 1.1726 - accuracy: 0.5930 - val_loss: 1.6336 - val_accuracy: 0.5068
Epoch 26/100
1407/1407 [==============================] - 10s 7ms/step - loss: 1.0573 - accuracy: 0.6332 - val_loss: 1.6119 - val_accuracy: 0.5010
Epoch 27/100
1407/1407 [==============================] - 10s 7ms/step - loss: 0.9830 - accuracy: 0.6601 - val_loss: 1.6229 - val_accuracy: 0.5116
Epoch 28/100
1407/1407 [==============================] - 10s 7ms/step - loss: 0.9373 - accuracy: 0.6779 - val_loss: 1.6697 - val_accuracy: 0.5130
Epoch 29/100
1407/1407 [==============================] - 10s 7ms/step - loss: 0.9030 - accuracy: 0.6893 - val_loss: 1.8191 - val_accuracy: 0.5062
157/157 [==============================] - 0s 2ms/step - loss: 1.5200 - accuracy: 0.4816
###Markdown
The model reaches 50.8% accuracy on the validation set, which is slightly worse than without dropout (51.4%). With an extensive hyperparameter search it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4 and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but using `MCAlphaDropout` layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple of utility functions. The first will run the model many times (10 by default) and return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get no real accuracy improvement in this case (from 50.8% to 50.9%). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(len(X_train_scaled) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/15
352/352 [==============================] - 1s 4ms/step - loss: 2.0539 - accuracy: 0.2884 - val_loss: 1.7958 - val_accuracy: 0.3850
Epoch 2/15
352/352 [==============================] - 1s 4ms/step - loss: 1.7618 - accuracy: 0.3799 - val_loss: 1.6774 - val_accuracy: 0.4146
Epoch 3/15
352/352 [==============================] - 1s 3ms/step - loss: 1.6205 - accuracy: 0.4267 - val_loss: 1.6370 - val_accuracy: 0.4316
Epoch 4/15
352/352 [==============================] - 1s 4ms/step - loss: 1.5417 - accuracy: 0.4569 - val_loss: 1.6533 - val_accuracy: 0.4284
Epoch 5/15
352/352 [==============================] - 1s 4ms/step - loss: 1.4912 - accuracy: 0.4704 - val_loss: 1.6111 - val_accuracy: 0.4506
Epoch 6/15
352/352 [==============================] - 1s 4ms/step - loss: 1.4522 - accuracy: 0.4852 - val_loss: 1.5521 - val_accuracy: 0.4630
Epoch 7/15
352/352 [==============================] - 2s 4ms/step - loss: 1.4111 - accuracy: 0.4990 - val_loss: 1.6209 - val_accuracy: 0.4410
Epoch 8/15
352/352 [==============================] - 1s 4ms/step - loss: 1.3476 - accuracy: 0.5226 - val_loss: 1.4934 - val_accuracy: 0.4788
Epoch 9/15
352/352 [==============================] - 2s 4ms/step - loss: 1.2723 - accuracy: 0.5482 - val_loss: 1.5100 - val_accuracy: 0.4842
Epoch 10/15
352/352 [==============================] - 1s 4ms/step - loss: 1.1988 - accuracy: 0.5750 - val_loss: 1.5216 - val_accuracy: 0.4998
Epoch 11/15
352/352 [==============================] - 1s 4ms/step - loss: 1.1329 - accuracy: 0.5984 - val_loss: 1.5251 - val_accuracy: 0.5074
Epoch 12/15
352/352 [==============================] - 1s 4ms/step - loss: 1.0625 - accuracy: 0.6217 - val_loss: 1.5088 - val_accuracy: 0.5144
Epoch 13/15
352/352 [==============================] - 1s 4ms/step - loss: 0.9945 - accuracy: 0.6454 - val_loss: 1.5114 - val_accuracy: 0.5260
Epoch 14/15
352/352 [==============================] - 2s 4ms/step - loss: 0.9292 - accuracy: 0.6699 - val_loss: 1.5292 - val_accuracy: 0.5310
Epoch 15/15
352/352 [==============================] - 1s 4ms/step - loss: 0.8901 - accuracy: 0.6831 - val_loss: 1.5513 - val_accuracy: 0.5314
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0-preview.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# TensorFlow ≥2.0-preview is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
leaky_relu = keras.layers.LeakyReLU(alpha=0.2)
layer = keras.layers.Dense(10, activation=leaky_relu)
layer.activation
###Output
_____no_output_____
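###Markdown
`PReLU` (a leaky ReLU whose leak coefficient is learned during training) is also available as a layer. A minimal sketch of how it would typically be inserted (the layer sizes below are illustrative, not part of the original experiments):
###Code
# Sketch: PReLU is usually added as its own layer right after a Dense layer
# that has no activation of its own.
model_prelu = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, kernel_initializer="he_normal"),
    keras.layers.PReLU(),
    keras.layers.Dense(100, kernel_initializer="he_normal"),
    keras.layers.PReLU(),
    keras.layers.Dense(10, activation="softmax")
])
###Output
_____no_output_____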
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation=leaky_relu),
keras.layers.Dense(100, activation=leaky_relu),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 64us/sample - loss: 1.3979 - accuracy: 0.5948 - val_loss: 0.9369 - val_accuracy: 0.7162
Epoch 2/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.8333 - accuracy: 0.7341 - val_loss: 0.7392 - val_accuracy: 0.7638
Epoch 3/10
55000/55000 [==============================] - 3s 58us/sample - loss: 0.7068 - accuracy: 0.7711 - val_loss: 0.6561 - val_accuracy: 0.7906
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.6417 - accuracy: 0.7889 - val_loss: 0.6052 - val_accuracy: 0.8088
Epoch 5/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.5988 - accuracy: 0.8019 - val_loss: 0.5716 - val_accuracy: 0.8166
Epoch 6/10
55000/55000 [==============================] - 3s 58us/sample - loss: 0.5686 - accuracy: 0.8118 - val_loss: 0.5465 - val_accuracy: 0.8234
Epoch 7/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.5460 - accuracy: 0.8181 - val_loss: 0.5273 - val_accuracy: 0.8314
Epoch 8/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.5281 - accuracy: 0.8229 - val_loss: 0.5108 - val_accuracy: 0.8370
Epoch 9/10
55000/55000 [==============================] - 3s 60us/sample - loss: 0.5137 - accuracy: 0.8261 - val_loss: 0.4985 - val_accuracy: 0.8398
Epoch 10/10
55000/55000 [==============================] - 3s 53us/sample - loss: 0.5018 - accuracy: 0.8289 - val_loss: 0.4901 - val_accuracy: 0.8382
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 35s 644us/sample - loss: 1.0197 - accuracy: 0.6154 - val_loss: 0.7386 - val_accuracy: 0.7348
Epoch 2/5
55000/55000 [==============================] - 33s 607us/sample - loss: 0.7149 - accuracy: 0.7401 - val_loss: 0.6187 - val_accuracy: 0.7774
Epoch 3/5
55000/55000 [==============================] - 32s 583us/sample - loss: 0.6193 - accuracy: 0.7803 - val_loss: 0.5926 - val_accuracy: 0.8036
Epoch 4/5
55000/55000 [==============================] - 32s 586us/sample - loss: 0.5555 - accuracy: 0.8043 - val_loss: 0.5208 - val_accuracy: 0.8262
Epoch 5/5
55000/55000 [==============================] - 32s 573us/sample - loss: 0.5159 - accuracy: 0.8238 - val_loss: 0.4790 - val_accuracy: 0.8358
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 18s 319us/sample - loss: 1.9174 - accuracy: 0.2242 - val_loss: 1.3856 - val_accuracy: 0.3846
Epoch 2/5
55000/55000 [==============================] - 15s 279us/sample - loss: 1.2147 - accuracy: 0.4750 - val_loss: 1.0691 - val_accuracy: 0.5510
Epoch 3/5
55000/55000 [==============================] - 15s 281us/sample - loss: 0.9576 - accuracy: 0.6025 - val_loss: 0.7688 - val_accuracy: 0.7036
Epoch 4/5
55000/55000 [==============================] - 15s 281us/sample - loss: 0.8116 - accuracy: 0.6762 - val_loss: 0.7276 - val_accuracy: 0.7288
Epoch 5/5
55000/55000 [==============================] - 15s 278us/sample - loss: 0.8167 - accuracy: 0.6862 - val_loss: 0.7697 - val_accuracy: 0.7032
###Markdown
Not great at all, we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 5s 85us/sample - loss: 0.8756 - accuracy: 0.7140 - val_loss: 0.5514 - val_accuracy: 0.8212
Epoch 2/10
55000/55000 [==============================] - 4s 74us/sample - loss: 0.5765 - accuracy: 0.8033 - val_loss: 0.4742 - val_accuracy: 0.8436
Epoch 3/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.5146 - accuracy: 0.8216 - val_loss: 0.4382 - val_accuracy: 0.8530
Epoch 4/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4821 - accuracy: 0.8322 - val_loss: 0.4170 - val_accuracy: 0.8604
Epoch 5/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4589 - accuracy: 0.8402 - val_loss: 0.4003 - val_accuracy: 0.8658
Epoch 6/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4428 - accuracy: 0.8459 - val_loss: 0.3883 - val_accuracy: 0.8698
Epoch 7/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4220 - accuracy: 0.8521 - val_loss: 0.3792 - val_accuracy: 0.8720
Epoch 8/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4150 - accuracy: 0.8546 - val_loss: 0.3696 - val_accuracy: 0.8754
Epoch 9/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4013 - accuracy: 0.8589 - val_loss: 0.3629 - val_accuracy: 0.8746
Epoch 10/10
55000/55000 [==============================] - 4s 74us/sample - loss: 0.3931 - accuracy: 0.8615 - val_loss: 0.3581 - val_accuracy: 0.8766
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has its own offset parameter: keeping the biases would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
    keras.layers.BatchNormalization(),
    keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 5s 89us/sample - loss: 0.8617 - accuracy: 0.7095 - val_loss: 0.5649 - val_accuracy: 0.8102
Epoch 2/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.5803 - accuracy: 0.8015 - val_loss: 0.4833 - val_accuracy: 0.8344
Epoch 3/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.5153 - accuracy: 0.8208 - val_loss: 0.4463 - val_accuracy: 0.8462
Epoch 4/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.4846 - accuracy: 0.8307 - val_loss: 0.4256 - val_accuracy: 0.8530
Epoch 5/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.4576 - accuracy: 0.8402 - val_loss: 0.4106 - val_accuracy: 0.8590
Epoch 6/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4401 - accuracy: 0.8467 - val_loss: 0.3973 - val_accuracy: 0.8610
Epoch 7/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4296 - accuracy: 0.8482 - val_loss: 0.3899 - val_accuracy: 0.8650
Epoch 8/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.4127 - accuracy: 0.8559 - val_loss: 0.3818 - val_accuracy: 0.8658
Epoch 9/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4007 - accuracy: 0.8588 - val_loss: 0.3741 - val_accuracy: 0.8682
Epoch 10/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.3929 - accuracy: 0.8621 - val_loss: 0.3694 - val_accuracy: 0.8734
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
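###Markdown
A quick usage sketch (any of the compiled models above would do, and the choice of `clipvalue=1.0` is illustrative): once the clipping optimizer is passed to `compile()`, gradients are clipped transparently at every training step, with no other change to the training loop.
###Code
# Sketch: gradient clipping is configured entirely through the optimizer.
optimizer = keras.optimizers.SGD(clipvalue=1.0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
              metrics=["accuracy"])
###Output
_____no_output_____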
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
# model_B_on_A above still shares layers with model_A, so training it would also
# modify model_A; rebuild it on top of the clone to keep model_A intact
model_B_on_A = keras.models.Sequential(model_A_clone.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy", optimizer="sgd",
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Train on 200 samples, validate on 986 samples
Epoch 1/4
200/200 [==============================] - 0s 2ms/sample - loss: 0.5851 - accuracy: 0.6600 - val_loss: 0.5855 - val_accuracy: 0.6318
Epoch 2/4
200/200 [==============================] - 0s 303us/sample - loss: 0.5484 - accuracy: 0.6850 - val_loss: 0.5484 - val_accuracy: 0.6775
Epoch 3/4
200/200 [==============================] - 0s 294us/sample - loss: 0.5116 - accuracy: 0.7250 - val_loss: 0.5141 - val_accuracy: 0.7160
Epoch 4/4
200/200 [==============================] - 0s 316us/sample - loss: 0.4779 - accuracy: 0.7450 - val_loss: 0.4859 - val_accuracy: 0.7363
Train on 200 samples, validate on 986 samples
Epoch 1/16
200/200 [==============================] - 0s 2ms/sample - loss: 0.3989 - accuracy: 0.8050 - val_loss: 0.3419 - val_accuracy: 0.8702
Epoch 2/16
200/200 [==============================] - 0s 328us/sample - loss: 0.2795 - accuracy: 0.9300 - val_loss: 0.2624 - val_accuracy: 0.9280
Epoch 3/16
200/200 [==============================] - 0s 319us/sample - loss: 0.2128 - accuracy: 0.9650 - val_loss: 0.2150 - val_accuracy: 0.9544
Epoch 4/16
200/200 [==============================] - 0s 318us/sample - loss: 0.1720 - accuracy: 0.9800 - val_loss: 0.1826 - val_accuracy: 0.9635
Epoch 5/16
200/200 [==============================] - 0s 317us/sample - loss: 0.1436 - accuracy: 0.9800 - val_loss: 0.1586 - val_accuracy: 0.9736
Epoch 6/16
200/200 [==============================] - 0s 317us/sample - loss: 0.1231 - accuracy: 0.9850 - val_loss: 0.1407 - val_accuracy: 0.9807
Epoch 7/16
200/200 [==============================] - 0s 325us/sample - loss: 0.1074 - accuracy: 0.9900 - val_loss: 0.1270 - val_accuracy: 0.9828
Epoch 8/16
200/200 [==============================] - 0s 326us/sample - loss: 0.0953 - accuracy: 0.9950 - val_loss: 0.1158 - val_accuracy: 0.9848
Epoch 9/16
200/200 [==============================] - 0s 319us/sample - loss: 0.0854 - accuracy: 1.0000 - val_loss: 0.1076 - val_accuracy: 0.9878
Epoch 10/16
200/200 [==============================] - 0s 322us/sample - loss: 0.0781 - accuracy: 1.0000 - val_loss: 0.1007 - val_accuracy: 0.9888
Epoch 11/16
200/200 [==============================] - 0s 316us/sample - loss: 0.0718 - accuracy: 1.0000 - val_loss: 0.0944 - val_accuracy: 0.9888
Epoch 12/16
200/200 [==============================] - 0s 319us/sample - loss: 0.0662 - accuracy: 1.0000 - val_loss: 0.0891 - val_accuracy: 0.9899
Epoch 13/16
200/200 [==============================] - 0s 318us/sample - loss: 0.0613 - accuracy: 1.0000 - val_loss: 0.0846 - val_accuracy: 0.9899
Epoch 14/16
200/200 [==============================] - 0s 332us/sample - loss: 0.0574 - accuracy: 1.0000 - val_loss: 0.0806 - val_accuracy: 0.9899
Epoch 15/16
200/200 [==============================] - 0s 320us/sample - loss: 0.0538 - accuracy: 1.0000 - val_loss: 0.0770 - val_accuracy: 0.9899
Epoch 16/16
200/200 [==============================] - 0s 320us/sample - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.0740 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
2000/2000 [==============================] - 0s 38us/sample - loss: 0.0689 - accuracy: 0.9925
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of almost 4!
###Code
(100 - 97.05) / (100 - 99.25)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
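###Markdown
As a sketch of how this variant is used (it reuses the `LearningRateScheduler` callback shown above): the callback reads the optimizer's current learning rate and passes it in as the second argument, so the function only needs to apply one decay step at a time:
###Code
# Sketch only: attach the two-argument schedule exactly like the one-argument one,
# then pass callbacks=[lr_scheduler] to model.fit() as in the previous cells.
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
###Output
_____no_output_____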
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4887 - accuracy: 0.8282 - val_loss: 0.4245 - val_accuracy: 0.8526
Epoch 2/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.3830 - accuracy: 0.8641 - val_loss: 0.3798 - val_accuracy: 0.8688
Epoch 3/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.3491 - accuracy: 0.8758 - val_loss: 0.3650 - val_accuracy: 0.8730
Epoch 4/25
55000/55000 [==============================] - 4s 78us/sample - loss: 0.3267 - accuracy: 0.8839 - val_loss: 0.3564 - val_accuracy: 0.8746
Epoch 5/25
55000/55000 [==============================] - 4s 72us/sample - loss: 0.3102 - accuracy: 0.8893 - val_loss: 0.3493 - val_accuracy: 0.8770
Epoch 6/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2969 - accuracy: 0.8939 - val_loss: 0.3400 - val_accuracy: 0.8818
Epoch 7/25
55000/55000 [==============================] - 4s 77us/sample - loss: 0.2855 - accuracy: 0.8983 - val_loss: 0.3385 - val_accuracy: 0.8830
Epoch 8/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2764 - accuracy: 0.9025 - val_loss: 0.3372 - val_accuracy: 0.8824
Epoch 9/25
55000/55000 [==============================] - 4s 67us/sample - loss: 0.2684 - accuracy: 0.9039 - val_loss: 0.3337 - val_accuracy: 0.8848
Epoch 10/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2613 - accuracy: 0.9072 - val_loss: 0.3277 - val_accuracy: 0.8862
Epoch 11/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2555 - accuracy: 0.9086 - val_loss: 0.3273 - val_accuracy: 0.8860
Epoch 12/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2500 - accuracy: 0.9111 - val_loss: 0.3244 - val_accuracy: 0.8840
Epoch 13/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2454 - accuracy: 0.9124 - val_loss: 0.3194 - val_accuracy: 0.8904
Epoch 14/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2414 - accuracy: 0.9141 - val_loss: 0.3226 - val_accuracy: 0.8884
Epoch 15/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2378 - accuracy: 0.9160 - val_loss: 0.3233 - val_accuracy: 0.8860
Epoch 16/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2347 - accuracy: 0.9174 - val_loss: 0.3207 - val_accuracy: 0.8904
Epoch 17/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2318 - accuracy: 0.9179 - val_loss: 0.3195 - val_accuracy: 0.8892
Epoch 18/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2293 - accuracy: 0.9193 - val_loss: 0.3184 - val_accuracy: 0.8916
Epoch 19/25
55000/55000 [==============================] - 4s 67us/sample - loss: 0.2272 - accuracy: 0.9201 - val_loss: 0.3196 - val_accuracy: 0.8886
Epoch 20/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2253 - accuracy: 0.9206 - val_loss: 0.3190 - val_accuracy: 0.8918
Epoch 21/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2235 - accuracy: 0.9214 - val_loss: 0.3176 - val_accuracy: 0.8912
Epoch 22/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2220 - accuracy: 0.9220 - val_loss: 0.3181 - val_accuracy: 0.8900
Epoch 23/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2206 - accuracy: 0.9226 - val_loss: 0.3187 - val_accuracy: 0.8894
Epoch 24/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2193 - accuracy: 0.9231 - val_loss: 0.3168 - val_accuracy: 0.8908
Epoch 25/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2181 - accuracy: 0.9234 - val_loss: 0.3171 - val_accuracy: 0.8898
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
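###Markdown
A short wiring sketch (it assumes the `learning_rate` schedule from the cell above, `n_steps_per_epoch` from the Power Scheduling cell, and a `model` like the ones built earlier): a `keras.optimizers.schedules` object is passed in place of a fixed learning rate, and the optimizer evaluates it at every training step:
###Code
# Sketch only: the schedule replaces a constant learning rate; compile and fit as usual.
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
              metrics=["accuracy"])
###Output
_____no_output_____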
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (iter2 - self.iteration)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 2s 30us/sample - loss: 0.4926 - accuracy: 0.8268 - val_loss: 0.4229 - val_accuracy: 0.8520
Epoch 2/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.3754 - accuracy: 0.8669 - val_loss: 0.3833 - val_accuracy: 0.8634
Epoch 3/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.3433 - accuracy: 0.8776 - val_loss: 0.3687 - val_accuracy: 0.8666
Epoch 4/25
55000/55000 [==============================] - 2s 28us/sample - loss: 0.3198 - accuracy: 0.8854 - val_loss: 0.3595 - val_accuracy: 0.8738
Epoch 5/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.3011 - accuracy: 0.8920 - val_loss: 0.3421 - val_accuracy: 0.8764
Epoch 6/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.2873 - accuracy: 0.8973 - val_loss: 0.3371 - val_accuracy: 0.8814
Epoch 7/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2738 - accuracy: 0.9026 - val_loss: 0.3312 - val_accuracy: 0.8842
Epoch 8/25
55000/55000 [==============================] - 2s 28us/sample - loss: 0.2633 - accuracy: 0.9071 - val_loss: 0.3338 - val_accuracy: 0.8824
Epoch 9/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.2543 - accuracy: 0.9098 - val_loss: 0.3296 - val_accuracy: 0.8840
Epoch 10/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2465 - accuracy: 0.9125 - val_loss: 0.3233 - val_accuracy: 0.8874
Epoch 11/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.2406 - accuracy: 0.9157 - val_loss: 0.3215 - val_accuracy: 0.8874
Epoch 12/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2366 - accuracy: 0.9173 - val_loss: 0.3237 - val_accuracy: 0.8862
Epoch 13/25
55000/55000 [==============================] - 2s 27us/sample - loss: 0.2370 - accuracy: 0.9160 - val_loss: 0.3282 - val_accuracy: 0.8856
Epoch 14/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2366 - accuracy: 0.9157 - val_loss: 0.3228 - val_accuracy: 0.8874
Epoch 15/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2362 - accuracy: 0.9162 - val_loss: 0.3261 - val_accuracy: 0.8860
Epoch 16/25
55000/55000 [==============================] - 2s 28us/sample - loss: 0.2339 - accuracy: 0.9167 - val_loss: 0.3336 - val_accuracy: 0.8830
Epoch 17/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2319 - accuracy: 0.9166 - val_loss: 0.3316 - val_accuracy: 0.8818
Epoch 18/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.2295 - accuracy: 0.9181 - val_loss: 0.3424 - val_accuracy: 0.8786
Epoch 19/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2266 - accuracy: 0.9186 - val_loss: 0.3356 - val_accuracy: 0.8844
Epoch 20/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2250 - accuracy: 0.9186 - val_loss: 0.3486 - val_accuracy: 0.8758
Epoch 21/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2221 - accuracy: 0.9189 - val_loss: 0.3443 - val_accuracy: 0.8856
Epoch 22/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2184 - accuracy: 0.9201 - val_loss: 0.3889 - val_accuracy: 0.8700
Epoch 23/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2040 - accuracy: 0.9266 - val_loss: 0.3216 - val_accuracy: 0.8910
Epoch 24/25
55000/55000 [==============================] - 2s 28us/sample - loss: 0.1750 - accuracy: 0.9401 - val_loss: 0.3153 - val_accuracy: 0.8932
Epoch 25/25
55000/55000 [==============================] - 2s 28us/sample - loss: 0.1718 - accuracy: 0.9416 - val_loss: 0.3153 - val_accuracy: 0.8940
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 7s 129us/sample - loss: 1.6597 - accuracy: 0.8128 - val_loss: 0.7630 - val_accuracy: 0.8080
Epoch 2/2
55000/55000 [==============================] - 7s 124us/sample - loss: 0.7176 - accuracy: 0.8271 - val_loss: 0.6848 - val_accuracy: 0.8360
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 145us/sample - loss: 0.5741 - accuracy: 0.8030 - val_loss: 0.3841 - val_accuracy: 0.8572
Epoch 2/2
55000/55000 [==============================] - 7s 134us/sample - loss: 0.4218 - accuracy: 0.8469 - val_loss: 0.3534 - val_accuracy: 0.8728
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
#with keras.backend.learning_phase_scope(1): # TODO: check https://github.com/tensorflow/tensorflow/issues/25754
# history = model.fit(X_train_scaled, y_train)
###Output
_____no_output_____
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
with keras.backend.learning_phase_scope(1): # TODO: check https://github.com/tensorflow/tensorflow/issues/25754
y_probas = np.stack([model.predict(X_test_scaled) for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
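###Markdown
As a brief follow-up sketch (it reuses `mc_model`, `X_test_scaled` and `y_test` defined earlier): since the MC dropout layers stay active at inference time, there is no need for the `learning_phase` trick anymore; simply average many stochastic forward passes and take the argmax:
###Code
# Sketch only: 100 stochastic passes through mc_model, averaged into class
# probabilities, then converted into hard predictions and an accuracy estimate.
y_probas_mc = np.stack([mc_model.predict(X_test_scaled) for _ in range(100)])
y_proba_mc = y_probas_mc.mean(axis=0)
y_pred_mc = np.argmax(y_proba_mc, axis=1)
mc_accuracy = np.sum(y_pred_mc == y_test) / len(y_test)
###Output
_____no_output_____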
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 147us/sample - loss: 0.4745 - accuracy: 0.8329 - val_loss: 0.3988 - val_accuracy: 0.8584
Epoch 2/2
55000/55000 [==============================] - 7s 135us/sample - loss: 0.3554 - accuracy: 0.8688 - val_loss: 0.3681 - val_accuracy: 0.8726
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6314 - accuracy: 0.5054 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.8416 - accuracy: 0.7247 - val_loss: 0.7130 - val_accuracy: 0.7656
Epoch 3/10
1719/1719 [==============================] - 2s 879us/step - loss: 0.7053 - accuracy: 0.7637 - val_loss: 0.6427 - val_accuracy: 0.7898
Epoch 4/10
1719/1719 [==============================] - 2s 883us/step - loss: 0.6325 - accuracy: 0.7908 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 2s 887us/step - loss: 0.5992 - accuracy: 0.8021 - val_loss: 0.5582 - val_accuracy: 0.8200
Epoch 6/10
1719/1719 [==============================] - 2s 881us/step - loss: 0.5624 - accuracy: 0.8142 - val_loss: 0.5350 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.5379 - accuracy: 0.8217 - val_loss: 0.5157 - val_accuracy: 0.8304
Epoch 8/10
1719/1719 [==============================] - 2s 895us/step - loss: 0.5152 - accuracy: 0.8295 - val_loss: 0.5078 - val_accuracy: 0.8284
Epoch 9/10
1719/1719 [==============================] - 2s 911us/step - loss: 0.5100 - accuracy: 0.8268 - val_loss: 0.4895 - val_accuracy: 0.8390
Epoch 10/10
1719/1719 [==============================] - 2s 897us/step - loss: 0.4918 - accuracy: 0.8340 - val_loss: 0.4817 - val_accuracy: 0.8396
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6969 - accuracy: 0.4974 - val_loss: 0.9255 - val_accuracy: 0.7186
Epoch 2/10
1719/1719 [==============================] - 2s 990us/step - loss: 0.8706 - accuracy: 0.7247 - val_loss: 0.7305 - val_accuracy: 0.7630
Epoch 3/10
1719/1719 [==============================] - 2s 980us/step - loss: 0.7211 - accuracy: 0.7621 - val_loss: 0.6564 - val_accuracy: 0.7882
Epoch 4/10
1719/1719 [==============================] - 2s 985us/step - loss: 0.6447 - accuracy: 0.7879 - val_loss: 0.6003 - val_accuracy: 0.8048
Epoch 5/10
1719/1719 [==============================] - 2s 967us/step - loss: 0.6077 - accuracy: 0.8004 - val_loss: 0.5656 - val_accuracy: 0.8182
Epoch 6/10
1719/1719 [==============================] - 2s 984us/step - loss: 0.5692 - accuracy: 0.8118 - val_loss: 0.5406 - val_accuracy: 0.8236
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5428 - accuracy: 0.8194 - val_loss: 0.5196 - val_accuracy: 0.8314
Epoch 8/10
1719/1719 [==============================] - 2s 983us/step - loss: 0.5193 - accuracy: 0.8284 - val_loss: 0.5113 - val_accuracy: 0.8316
Epoch 9/10
1719/1719 [==============================] - 2s 992us/step - loss: 0.5128 - accuracy: 0.8272 - val_loss: 0.4916 - val_accuracy: 0.8378
Epoch 10/10
1719/1719 [==============================] - 2s 988us/step - loss: 0.4941 - accuracy: 0.8314 - val_loss: 0.4826 - val_accuracy: 0.8398
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 12s 6ms/step - loss: 1.3556 - accuracy: 0.4808 - val_loss: 0.7711 - val_accuracy: 0.6858
Epoch 2/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7537 - accuracy: 0.7235 - val_loss: 0.7534 - val_accuracy: 0.7384
Epoch 3/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7451 - accuracy: 0.7357 - val_loss: 0.5943 - val_accuracy: 0.7834
Epoch 4/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5699 - accuracy: 0.7906 - val_loss: 0.5434 - val_accuracy: 0.8066
Epoch 5/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5569 - accuracy: 0.8051 - val_loss: 0.4907 - val_accuracy: 0.8218
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 11s 5ms/step - loss: 2.0460 - accuracy: 0.1919 - val_loss: 1.5971 - val_accuracy: 0.3048
Epoch 2/5
1719/1719 [==============================] - 8s 5ms/step - loss: 1.2654 - accuracy: 0.4591 - val_loss: 0.9156 - val_accuracy: 0.6372
Epoch 3/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.9312 - accuracy: 0.6169 - val_loss: 0.8928 - val_accuracy: 0.6246
Epoch 4/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.8188 - accuracy: 0.6710 - val_loss: 0.6914 - val_accuracy: 0.7396
Epoch 5/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.7288 - accuracy: 0.7152 - val_loss: 0.6638 - val_accuracy: 0.7380
###Markdown
Not great at all: we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
#bn1.updates #deprecated
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.2287 - accuracy: 0.5993 - val_loss: 0.5526 - val_accuracy: 0.8230
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5996 - accuracy: 0.7959 - val_loss: 0.4725 - val_accuracy: 0.8468
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5312 - accuracy: 0.8168 - val_loss: 0.4375 - val_accuracy: 0.8558
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4884 - accuracy: 0.8294 - val_loss: 0.4153 - val_accuracy: 0.8596
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4717 - accuracy: 0.8343 - val_loss: 0.3997 - val_accuracy: 0.8640
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4420 - accuracy: 0.8461 - val_loss: 0.3867 - val_accuracy: 0.8694
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4285 - accuracy: 0.8496 - val_loss: 0.3763 - val_accuracy: 0.8710
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4086 - accuracy: 0.8552 - val_loss: 0.3711 - val_accuracy: 0.8740
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4079 - accuracy: 0.8566 - val_loss: 0.3631 - val_accuracy: 0.8752
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3903 - accuracy: 0.8617 - val_loss: 0.3573 - val_accuracy: 0.8750
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has some as well; adding them would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.3677 - accuracy: 0.5604 - val_loss: 0.6767 - val_accuracy: 0.7812
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.7136 - accuracy: 0.7702 - val_loss: 0.5566 - val_accuracy: 0.8184
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.6123 - accuracy: 0.7990 - val_loss: 0.5007 - val_accuracy: 0.8360
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5547 - accuracy: 0.8148 - val_loss: 0.4666 - val_accuracy: 0.8448
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5255 - accuracy: 0.8230 - val_loss: 0.4434 - val_accuracy: 0.8534
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4947 - accuracy: 0.8328 - val_loss: 0.4263 - val_accuracy: 0.8550
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4736 - accuracy: 0.8385 - val_loss: 0.4130 - val_accuracy: 0.8566
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4550 - accuracy: 0.8446 - val_loss: 0.4035 - val_accuracy: 0.8612
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4495 - accuracy: 0.8440 - val_loss: 0.3943 - val_accuracy: 0.8638
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4333 - accuracy: 0.8494 - val_loss: 0.3875 - val_accuracy: 0.8660
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
###Output
_____no_output_____
###Markdown
Note that `model_B_on_A` and `model_A` actually share layers now, so when we train one, it will update both models. If we want to avoid that, we need to build `model_B_on_A` on top of a *clone* of `model_A`:
###Code
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
model_B_on_A = keras.models.Sequential(model_A_clone.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 0s 29ms/step - loss: 0.2575 - accuracy: 0.9487 - val_loss: 0.2797 - val_accuracy: 0.9270
Epoch 2/4
7/7 [==============================] - 0s 9ms/step - loss: 0.2566 - accuracy: 0.9371 - val_loss: 0.2701 - val_accuracy: 0.9300
Epoch 3/4
7/7 [==============================] - 0s 9ms/step - loss: 0.2473 - accuracy: 0.9332 - val_loss: 0.2613 - val_accuracy: 0.9341
Epoch 4/4
7/7 [==============================] - 0s 10ms/step - loss: 0.2450 - accuracy: 0.9463 - val_loss: 0.2531 - val_accuracy: 0.9391
Epoch 1/16
7/7 [==============================] - 1s 29ms/step - loss: 0.2106 - accuracy: 0.9524 - val_loss: 0.2045 - val_accuracy: 0.9615
Epoch 2/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1738 - accuracy: 0.9526 - val_loss: 0.1719 - val_accuracy: 0.9706
Epoch 3/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1451 - accuracy: 0.9660 - val_loss: 0.1491 - val_accuracy: 0.9807
Epoch 4/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1242 - accuracy: 0.9717 - val_loss: 0.1325 - val_accuracy: 0.9817
Epoch 5/16
7/7 [==============================] - 0s 11ms/step - loss: 0.1078 - accuracy: 0.9855 - val_loss: 0.1200 - val_accuracy: 0.9848
Epoch 6/16
7/7 [==============================] - 0s 11ms/step - loss: 0.1075 - accuracy: 0.9931 - val_loss: 0.1101 - val_accuracy: 0.9858
Epoch 7/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0893 - accuracy: 0.9950 - val_loss: 0.1020 - val_accuracy: 0.9858
Epoch 8/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0815 - accuracy: 0.9950 - val_loss: 0.0953 - val_accuracy: 0.9868
Epoch 9/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0640 - accuracy: 0.9973 - val_loss: 0.0892 - val_accuracy: 0.9868
Epoch 10/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0641 - accuracy: 0.9931 - val_loss: 0.0844 - val_accuracy: 0.9878
Epoch 11/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0609 - accuracy: 0.9931 - val_loss: 0.0800 - val_accuracy: 0.9888
Epoch 12/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0641 - accuracy: 1.0000 - val_loss: 0.0762 - val_accuracy: 0.9888
Epoch 13/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0478 - accuracy: 1.0000 - val_loss: 0.0728 - val_accuracy: 0.9888
Epoch 14/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0444 - accuracy: 1.0000 - val_loss: 0.0700 - val_accuracy: 0.9878
Epoch 15/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0490 - accuracy: 1.0000 - val_loss: 0.0675 - val_accuracy: 0.9878
Epoch 16/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0434 - accuracy: 1.0000 - val_loss: 0.0652 - val_accuracy: 0.9878
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 751us/step - loss: 0.0562 - accuracy: 0.9940
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4.9!
###Code
(100 - 97.05) / (100 - 99.40)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(learning_rate=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```
* Keras uses `c=1` and `s = 1 / decay`
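For example, with `lr0 = 0.01` and `decay = 1e-4` as in the cell below, the learning rate falls to about 0.01 / (1 + 1e-4 * 1719) ≈ 0.0085 after one epoch (1,719 steps with batch size 32) and to about 0.0019 after 25 epochs.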
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
import math
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = math.ceil(len(X_train) / batch_size)
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
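For example, with `lr0 = 0.01` and `s = 20` (the values used below), the learning rate is divided by 10 every 20 epochs: 0.01 at epoch 0, 0.001 at epoch 20, and 0.0001 at epoch 40.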
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
    # multiplying the current rate by 0.1**(1 / 20) at every epoch gives the same
    # decay as the epoch-based schedule above (a factor of 10 every 20 epochs),
    # provided training starts from the intended initial learning rate
    return lr * 0.1**(1 / 20)
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.learning_rate)
K.set_value(self.model.optimizer.learning_rate, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.learning_rate)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(learning_rate=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
    boundaries = np.array([0] + boundaries)
    values = np.array(values)
    def piecewise_constant_fn(epoch):
        # np.argmax finds the first boundary above the current epoch; subtracting 1
        # selects that interval's value (and wraps around to the last value once
        # every boundary has been passed)
        return values[np.argmax(boundaries > epoch) - 1]
    return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5995 - accuracy: 0.7923 - val_loss: 0.4095 - val_accuracy: 0.8606
Epoch 2/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3890 - accuracy: 0.8613 - val_loss: 0.3738 - val_accuracy: 0.8692
Epoch 3/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3530 - accuracy: 0.8772 - val_loss: 0.3735 - val_accuracy: 0.8692
Epoch 4/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3296 - accuracy: 0.8813 - val_loss: 0.3494 - val_accuracy: 0.8798
Epoch 5/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3178 - accuracy: 0.8867 - val_loss: 0.3430 - val_accuracy: 0.8794
Epoch 6/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2930 - accuracy: 0.8951 - val_loss: 0.3414 - val_accuracy: 0.8826
Epoch 7/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2854 - accuracy: 0.8985 - val_loss: 0.3354 - val_accuracy: 0.8810
Epoch 8/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9039 - val_loss: 0.3364 - val_accuracy: 0.8824
Epoch 9/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9047 - val_loss: 0.3265 - val_accuracy: 0.8846
Epoch 10/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2570 - accuracy: 0.9084 - val_loss: 0.3238 - val_accuracy: 0.8854
Epoch 11/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2502 - accuracy: 0.9117 - val_loss: 0.3250 - val_accuracy: 0.8862
Epoch 12/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2453 - accuracy: 0.9145 - val_loss: 0.3299 - val_accuracy: 0.8830
Epoch 13/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2408 - accuracy: 0.9154 - val_loss: 0.3219 - val_accuracy: 0.8870
Epoch 14/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2380 - accuracy: 0.9154 - val_loss: 0.3221 - val_accuracy: 0.8860
Epoch 15/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2378 - accuracy: 0.9166 - val_loss: 0.3208 - val_accuracy: 0.8864
Epoch 16/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2318 - accuracy: 0.9191 - val_loss: 0.3184 - val_accuracy: 0.8892
Epoch 17/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2266 - accuracy: 0.9212 - val_loss: 0.3197 - val_accuracy: 0.8906
Epoch 18/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2284 - accuracy: 0.9185 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 19/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2286 - accuracy: 0.9205 - val_loss: 0.3197 - val_accuracy: 0.8884
Epoch 20/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2288 - accuracy: 0.9211 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 21/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2265 - accuracy: 0.9212 - val_loss: 0.3179 - val_accuracy: 0.8904
Epoch 22/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2258 - accuracy: 0.9205 - val_loss: 0.3163 - val_accuracy: 0.8914
Epoch 23/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9226 - val_loss: 0.3170 - val_accuracy: 0.8904
Epoch 24/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2182 - accuracy: 0.9244 - val_loss: 0.3165 - val_accuracy: 0.8898
Epoch 25/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9229 - val_loss: 0.3164 - val_accuracy: 0.8904
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
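# As with the ExponentialDecay schedule earlier, the schedule object can be
# passed directly to an optimizer, for example:
optimizer = keras.optimizers.SGD(learning_rate)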
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.learning_rate))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.learning_rate, self.model.optimizer.learning_rate * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = math.ceil(len(X) / batch_size) * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.learning_rate)
K.set_value(model.optimizer.learning_rate, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.learning_rate, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
###Output
_____no_output_____
###Markdown
**Warning**: In the `on_batch_end()` method, `logs["loss"]` used to contain the batch loss, but in TensorFlow 2.2.0 it was replaced with the mean loss (since the start of the epoch). This explains why the graph below is much smoother than in the book (if you are using TF 2.2 or above). It also means that there is a lag between the moment the batch loss starts exploding and the moment the explosion becomes clear in the graph. So you should choose a slightly smaller learning rate than you would have chosen with the "noisy" graph. Alternatively, you can tweak the `ExponentialLearningRate` callback above so it computes the batch loss (based on the current mean loss and the previous mean loss):
```python
class ExponentialLearningRate(keras.callbacks.Callback):
    def __init__(self, factor):
        self.factor = factor
        self.rates = []
        self.losses = []
    def on_epoch_begin(self, epoch, logs=None):
        self.prev_loss = 0
    def on_batch_end(self, batch, logs=None):
        batch_loss = logs["loss"] * (batch + 1) - self.prev_loss * batch
        self.prev_loss = logs["loss"]
        self.rates.append(K.get_value(self.model.optimizer.learning_rate))
        self.losses.append(batch_loss)
        K.set_value(self.model.optimizer.learning_rate,
                    self.model.optimizer.learning_rate * self.factor)
```
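For example, if the mean loss reported at the end of the third batch (batch index 2) is 0.50 and the mean after the previous batch was 0.60, the recovered batch loss is 0.50 * 3 - 0.60 * 2 = 0.30.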
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.learning_rate, rate)
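# Worked example for the run below (55,000 training images, batch_size=128, so
# ceil(55000/128) = 430 steps per epoch and 430 * 25 = 10,750 iterations, with
# max_rate=0.05):
#   start_rate      = 0.05 / 10              = 0.005
#   last_iterations = 10,750 // 10 + 1       = 1,076
#   half_iteration  = (10,750 - 1,076) // 2  = 4,837
#   last_rate       = 0.005 / 1000           = 5e-6
# The rate therefore rises linearly from 0.005 to 0.05 over the first 4,837
# iterations, falls back to 0.005 over the next 4,837, then drops to 5e-6 over
# the final 1,076 iterations.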
n_epochs = 25
onecycle = OneCycleScheduler(math.ceil(len(X_train) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 1s 2ms/step - loss: 0.6572 - accuracy: 0.7740 - val_loss: 0.4872 - val_accuracy: 0.8338
Epoch 2/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4580 - accuracy: 0.8397 - val_loss: 0.4274 - val_accuracy: 0.8520
Epoch 3/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4121 - accuracy: 0.8545 - val_loss: 0.4116 - val_accuracy: 0.8588
Epoch 4/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3837 - accuracy: 0.8642 - val_loss: 0.3868 - val_accuracy: 0.8688
Epoch 5/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3639 - accuracy: 0.8719 - val_loss: 0.3766 - val_accuracy: 0.8688
Epoch 6/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3456 - accuracy: 0.8775 - val_loss: 0.3739 - val_accuracy: 0.8706
Epoch 7/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3330 - accuracy: 0.8811 - val_loss: 0.3635 - val_accuracy: 0.8708
Epoch 8/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3184 - accuracy: 0.8861 - val_loss: 0.3959 - val_accuracy: 0.8610
Epoch 9/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3065 - accuracy: 0.8890 - val_loss: 0.3475 - val_accuracy: 0.8770
Epoch 10/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2943 - accuracy: 0.8927 - val_loss: 0.3392 - val_accuracy: 0.8806
Epoch 11/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2838 - accuracy: 0.8963 - val_loss: 0.3467 - val_accuracy: 0.8800
Epoch 12/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2707 - accuracy: 0.9024 - val_loss: 0.3646 - val_accuracy: 0.8696
Epoch 13/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2536 - accuracy: 0.9079 - val_loss: 0.3350 - val_accuracy: 0.8842
Epoch 14/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2405 - accuracy: 0.9135 - val_loss: 0.3465 - val_accuracy: 0.8794
Epoch 15/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2279 - accuracy: 0.9185 - val_loss: 0.3257 - val_accuracy: 0.8830
Epoch 16/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2159 - accuracy: 0.9232 - val_loss: 0.3294 - val_accuracy: 0.8824
Epoch 17/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2062 - accuracy: 0.9263 - val_loss: 0.3333 - val_accuracy: 0.8882
Epoch 18/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1978 - accuracy: 0.9301 - val_loss: 0.3235 - val_accuracy: 0.8898
Epoch 19/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1892 - accuracy: 0.9337 - val_loss: 0.3233 - val_accuracy: 0.8906
Epoch 20/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1821 - accuracy: 0.9365 - val_loss: 0.3224 - val_accuracy: 0.8928
Epoch 21/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1752 - accuracy: 0.9400 - val_loss: 0.3220 - val_accuracy: 0.8908
Epoch 22/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1700 - accuracy: 0.9416 - val_loss: 0.3180 - val_accuracy: 0.8962
Epoch 23/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1655 - accuracy: 0.9438 - val_loss: 0.3187 - val_accuracy: 0.8940
Epoch 24/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1627 - accuracy: 0.9454 - val_loss: 0.3177 - val_accuracy: 0.8932
Epoch 25/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1610 - accuracy: 0.9462 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 3.2911 - accuracy: 0.7924 - val_loss: 0.7218 - val_accuracy: 0.8310
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.7282 - accuracy: 0.8245 - val_loss: 0.6826 - val_accuracy: 0.8382
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 0.7611 - accuracy: 0.7576 - val_loss: 0.3730 - val_accuracy: 0.8644
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4306 - accuracy: 0.8401 - val_loss: 0.3395 - val_accuracy: 0.8722
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4225 - accuracy: 0.8432
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
# Calling the model with training=True keeps the dropout layers active, so each
# of the 100 forward passes gives a different prediction (Monte Carlo Dropout)
y_probas = np.stack([model(X_test_scaled, training=True)
                     for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
# max_norm(1.) rescales each neuron's incoming weight vector after each training
# step so that its L2 norm never exceeds 1
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
                           kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5763 - accuracy: 0.8020 - val_loss: 0.3674 - val_accuracy: 0.8674
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3545 - accuracy: 0.8709 - val_loss: 0.3714 - val_accuracy: 0.8662
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(learning_rate=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4960 - accuracy: 0.4762
###Markdown
The model with the lowest validation loss gets about 47.6% accuracy on the validation set. It took 27 epochs to reach the lowest validation loss, with roughly 8 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 19s 9ms/step - loss: 1.9765 - accuracy: 0.2968 - val_loss: 1.6602 - val_accuracy: 0.4042
Epoch 2/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6787 - accuracy: 0.4056 - val_loss: 1.5887 - val_accuracy: 0.4304
Epoch 3/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6097 - accuracy: 0.4274 - val_loss: 1.5781 - val_accuracy: 0.4326
Epoch 4/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5574 - accuracy: 0.4486 - val_loss: 1.5064 - val_accuracy: 0.4676
Epoch 5/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5075 - accuracy: 0.4642 - val_loss: 1.4412 - val_accuracy: 0.4844
Epoch 6/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4664 - accuracy: 0.4787 - val_loss: 1.4179 - val_accuracy: 0.4984
Epoch 7/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4334 - accuracy: 0.4932 - val_loss: 1.4277 - val_accuracy: 0.4906
Epoch 8/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.4054 - accuracy: 0.5038 - val_loss: 1.3843 - val_accuracy: 0.5130
Epoch 9/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3816 - accuracy: 0.5106 - val_loss: 1.3691 - val_accuracy: 0.5108
Epoch 10/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3547 - accuracy: 0.5206 - val_loss: 1.3552 - val_accuracy: 0.5226
Epoch 11/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.3244 - accuracy: 0.5371 - val_loss: 1.3678 - val_accuracy: 0.5142
Epoch 12/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3078 - accuracy: 0.5393 - val_loss: 1.3844 - val_accuracy: 0.5080
Epoch 13/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2889 - accuracy: 0.5431 - val_loss: 1.3566 - val_accuracy: 0.5164
Epoch 14/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2607 - accuracy: 0.5559 - val_loss: 1.3626 - val_accuracy: 0.5248
Epoch 15/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2580 - accuracy: 0.5587 - val_loss: 1.3616 - val_accuracy: 0.5276
Epoch 16/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2441 - accuracy: 0.5586 - val_loss: 1.3350 - val_accuracy: 0.5286
Epoch 17/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2241 - accuracy: 0.5676 - val_loss: 1.3370 - val_accuracy: 0.5408
Epoch 18/100
<<29 more lines>>
Epoch 33/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0336 - accuracy: 0.6369 - val_loss: 1.3682 - val_accuracy: 0.5450
Epoch 34/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.0228 - accuracy: 0.6388 - val_loss: 1.3348 - val_accuracy: 0.5458
Epoch 35/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0205 - accuracy: 0.6407 - val_loss: 1.3490 - val_accuracy: 0.5440
Epoch 36/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.0008 - accuracy: 0.6489 - val_loss: 1.3568 - val_accuracy: 0.5408
Epoch 37/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9785 - accuracy: 0.6543 - val_loss: 1.3628 - val_accuracy: 0.5396
Epoch 38/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9832 - accuracy: 0.6592 - val_loss: 1.3617 - val_accuracy: 0.5482
Epoch 39/100
1407/1407 [==============================] - 12s 8ms/step - loss: 0.9707 - accuracy: 0.6581 - val_loss: 1.3767 - val_accuracy: 0.5446
Epoch 40/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9590 - accuracy: 0.6651 - val_loss: 1.4200 - val_accuracy: 0.5314
Epoch 41/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9548 - accuracy: 0.6668 - val_loss: 1.3692 - val_accuracy: 0.5450
Epoch 42/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9480 - accuracy: 0.6667 - val_loss: 1.3841 - val_accuracy: 0.5310
Epoch 43/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9411 - accuracy: 0.6716 - val_loss: 1.4036 - val_accuracy: 0.5382
Epoch 44/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9383 - accuracy: 0.6708 - val_loss: 1.4114 - val_accuracy: 0.5236
Epoch 45/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9258 - accuracy: 0.6769 - val_loss: 1.4224 - val_accuracy: 0.5324
Epoch 46/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9072 - accuracy: 0.6836 - val_loss: 1.3875 - val_accuracy: 0.5442
Epoch 47/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8996 - accuracy: 0.6850 - val_loss: 1.4449 - val_accuracy: 0.5280
Epoch 48/100
1407/1407 [==============================] - 13s 9ms/step - loss: 0.9050 - accuracy: 0.6835 - val_loss: 1.4167 - val_accuracy: 0.5338
Epoch 49/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8934 - accuracy: 0.6880 - val_loss: 1.4260 - val_accuracy: 0.5294
157/157 [==============================] - 1s 2ms/step - loss: 1.3344 - accuracy: 0.5398
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 27 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 5 epochs and continued to make progress until the 16th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 54.0% accuracy instead of 47.6%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 12s instead of 8s, because of the extra computations required by the BN layers. But overall the training time (wall time) was shortened significantly! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
# Standardize the inputs (zero mean, unit variance), as required for SELU self-normalization
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4633 - accuracy: 0.4792
###Markdown
We get 47.9% accuracy, which is not much better than the original model (47.6%), and not as good as the model using batch normalization (54.0%). However, convergence was almost as fast as with the BN model, plus each epoch took only 7 seconds. So it's by far the fastest model to train so far. e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 9s 5ms/step - loss: 2.0583 - accuracy: 0.2742 - val_loss: 1.7429 - val_accuracy: 0.3858
Epoch 2/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.6852 - accuracy: 0.4008 - val_loss: 1.7055 - val_accuracy: 0.3792
Epoch 3/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5963 - accuracy: 0.4413 - val_loss: 1.7401 - val_accuracy: 0.4072
Epoch 4/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5231 - accuracy: 0.4634 - val_loss: 1.5728 - val_accuracy: 0.4584
Epoch 5/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.4619 - accuracy: 0.4887 - val_loss: 1.5448 - val_accuracy: 0.4702
Epoch 6/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.4074 - accuracy: 0.5061 - val_loss: 1.5678 - val_accuracy: 0.4664
Epoch 7/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.3718 - accuracy: 0.5222 - val_loss: 1.5764 - val_accuracy: 0.4824
Epoch 8/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.3220 - accuracy: 0.5387 - val_loss: 1.4805 - val_accuracy: 0.4890
Epoch 9/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2908 - accuracy: 0.5487 - val_loss: 1.5521 - val_accuracy: 0.4638
Epoch 10/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.2537 - accuracy: 0.5607 - val_loss: 1.5281 - val_accuracy: 0.4924
Epoch 11/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2215 - accuracy: 0.5782 - val_loss: 1.5147 - val_accuracy: 0.5046
Epoch 12/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.1910 - accuracy: 0.5831 - val_loss: 1.5248 - val_accuracy: 0.5002
Epoch 13/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1659 - accuracy: 0.5982 - val_loss: 1.5620 - val_accuracy: 0.5066
Epoch 14/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1282 - accuracy: 0.6120 - val_loss: 1.5440 - val_accuracy: 0.5180
Epoch 15/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1127 - accuracy: 0.6133 - val_loss: 1.5782 - val_accuracy: 0.5146
Epoch 16/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0917 - accuracy: 0.6266 - val_loss: 1.6182 - val_accuracy: 0.5182
Epoch 17/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.0620 - accuracy: 0.6331 - val_loss: 1.6285 - val_accuracy: 0.5126
Epoch 18/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0433 - accuracy: 0.6413 - val_loss: 1.6299 - val_accuracy: 0.5158
Epoch 19/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0087 - accuracy: 0.6549 - val_loss: 1.7172 - val_accuracy: 0.5062
Epoch 20/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9950 - accuracy: 0.6571 - val_loss: 1.6524 - val_accuracy: 0.5098
Epoch 21/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9848 - accuracy: 0.6652 - val_loss: 1.7686 - val_accuracy: 0.5038
Epoch 22/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9597 - accuracy: 0.6744 - val_loss: 1.6177 - val_accuracy: 0.5084
Epoch 23/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9399 - accuracy: 0.6790 - val_loss: 1.7095 - val_accuracy: 0.5082
Epoch 24/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9148 - accuracy: 0.6884 - val_loss: 1.7160 - val_accuracy: 0.5150
Epoch 25/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9023 - accuracy: 0.6949 - val_loss: 1.7017 - val_accuracy: 0.5152
Epoch 26/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8732 - accuracy: 0.7031 - val_loss: 1.7274 - val_accuracy: 0.5088
Epoch 27/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.8542 - accuracy: 0.7091 - val_loss: 1.7648 - val_accuracy: 0.5166
Epoch 28/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8499 - accuracy: 0.7118 - val_loss: 1.7973 - val_accuracy: 0.5000
157/157 [==============================] - 0s 1ms/step - loss: 1.4805 - accuracy: 0.4890
###Markdown
The model reaches 48.9% accuracy on the validation set. That's very slightly better than without dropout (47.6%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple of utility functions. The first runs the model many times (10 by default) and returns the mean predicted class probabilities. The second uses these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get no accuracy improvement in this case (we're still at 48.9% accuracy). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(learning_rate=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(math.ceil(len(X_train_scaled) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/15
352/352 [==============================] - 3s 6ms/step - loss: 2.2298 - accuracy: 0.2349 - val_loss: 1.7841 - val_accuracy: 0.3834
Epoch 2/15
352/352 [==============================] - 2s 6ms/step - loss: 1.7928 - accuracy: 0.3689 - val_loss: 1.6806 - val_accuracy: 0.4086
Epoch 3/15
352/352 [==============================] - 2s 6ms/step - loss: 1.6475 - accuracy: 0.4190 - val_loss: 1.6378 - val_accuracy: 0.4350
Epoch 4/15
352/352 [==============================] - 2s 6ms/step - loss: 1.5428 - accuracy: 0.4543 - val_loss: 1.6266 - val_accuracy: 0.4390
Epoch 5/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4865 - accuracy: 0.4769 - val_loss: 1.6158 - val_accuracy: 0.4384
Epoch 6/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4339 - accuracy: 0.4866 - val_loss: 1.5850 - val_accuracy: 0.4412
Epoch 7/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4042 - accuracy: 0.5056 - val_loss: 1.6146 - val_accuracy: 0.4384
Epoch 8/15
352/352 [==============================] - 2s 6ms/step - loss: 1.3437 - accuracy: 0.5229 - val_loss: 1.5299 - val_accuracy: 0.4846
Epoch 9/15
352/352 [==============================] - 2s 5ms/step - loss: 1.2721 - accuracy: 0.5459 - val_loss: 1.5145 - val_accuracy: 0.4874
Epoch 10/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1942 - accuracy: 0.5698 - val_loss: 1.4958 - val_accuracy: 0.5040
Epoch 11/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1211 - accuracy: 0.6033 - val_loss: 1.5406 - val_accuracy: 0.4984
Epoch 12/15
352/352 [==============================] - 2s 6ms/step - loss: 1.0673 - accuracy: 0.6161 - val_loss: 1.5284 - val_accuracy: 0.5144
Epoch 13/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9927 - accuracy: 0.6435 - val_loss: 1.5449 - val_accuracy: 0.5140
Epoch 14/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9205 - accuracy: 0.6703 - val_loss: 1.5652 - val_accuracy: 0.5224
Epoch 15/15
352/352 [==============================] - 2s 6ms/step - loss: 0.8936 - accuracy: 0.6801 - val_loss: 1.5912 - val_accuracy: 0.5198
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure Matplotlib plots figures inline, and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6314 - accuracy: 0.5054 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.8416 - accuracy: 0.7247 - val_loss: 0.7130 - val_accuracy: 0.7656
Epoch 3/10
1719/1719 [==============================] - 2s 879us/step - loss: 0.7053 - accuracy: 0.7637 - val_loss: 0.6427 - val_accuracy: 0.7898
Epoch 4/10
1719/1719 [==============================] - 2s 883us/step - loss: 0.6325 - accuracy: 0.7908 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 2s 887us/step - loss: 0.5992 - accuracy: 0.8021 - val_loss: 0.5582 - val_accuracy: 0.8200
Epoch 6/10
1719/1719 [==============================] - 2s 881us/step - loss: 0.5624 - accuracy: 0.8142 - val_loss: 0.5350 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.5379 - accuracy: 0.8217 - val_loss: 0.5157 - val_accuracy: 0.8304
Epoch 8/10
1719/1719 [==============================] - 2s 895us/step - loss: 0.5152 - accuracy: 0.8295 - val_loss: 0.5078 - val_accuracy: 0.8284
Epoch 9/10
1719/1719 [==============================] - 2s 911us/step - loss: 0.5100 - accuracy: 0.8268 - val_loss: 0.4895 - val_accuracy: 0.8390
Epoch 10/10
1719/1719 [==============================] - 2s 897us/step - loss: 0.4918 - accuracy: 0.8340 - val_loss: 0.4817 - val_accuracy: 0.8396
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6969 - accuracy: 0.4974 - val_loss: 0.9255 - val_accuracy: 0.7186
Epoch 2/10
1719/1719 [==============================] - 2s 990us/step - loss: 0.8706 - accuracy: 0.7247 - val_loss: 0.7305 - val_accuracy: 0.7630
Epoch 3/10
1719/1719 [==============================] - 2s 980us/step - loss: 0.7211 - accuracy: 0.7621 - val_loss: 0.6564 - val_accuracy: 0.7882
Epoch 4/10
1719/1719 [==============================] - 2s 985us/step - loss: 0.6447 - accuracy: 0.7879 - val_loss: 0.6003 - val_accuracy: 0.8048
Epoch 5/10
1719/1719 [==============================] - 2s 967us/step - loss: 0.6077 - accuracy: 0.8004 - val_loss: 0.5656 - val_accuracy: 0.8182
Epoch 6/10
1719/1719 [==============================] - 2s 984us/step - loss: 0.5692 - accuracy: 0.8118 - val_loss: 0.5406 - val_accuracy: 0.8236
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5428 - accuracy: 0.8194 - val_loss: 0.5196 - val_accuracy: 0.8314
Epoch 8/10
1719/1719 [==============================] - 2s 983us/step - loss: 0.5193 - accuracy: 0.8284 - val_loss: 0.5113 - val_accuracy: 0.8316
Epoch 9/10
1719/1719 [==============================] - 2s 992us/step - loss: 0.5128 - accuracy: 0.8272 - val_loss: 0.4916 - val_accuracy: 0.8378
Epoch 10/10
1719/1719 [==============================] - 2s 988us/step - loss: 0.4941 - accuracy: 0.8314 - val_loss: 0.4826 - val_accuracy: 0.8398
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial; just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 12s 6ms/step - loss: 1.3556 - accuracy: 0.4808 - val_loss: 0.7711 - val_accuracy: 0.6858
Epoch 2/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7537 - accuracy: 0.7235 - val_loss: 0.7534 - val_accuracy: 0.7384
Epoch 3/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7451 - accuracy: 0.7357 - val_loss: 0.5943 - val_accuracy: 0.7834
Epoch 4/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5699 - accuracy: 0.7906 - val_loss: 0.5434 - val_accuracy: 0.8066
Epoch 5/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5569 - accuracy: 0.8051 - val_loss: 0.4907 - val_accuracy: 0.8218
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 11s 5ms/step - loss: 2.0460 - accuracy: 0.1919 - val_loss: 1.5971 - val_accuracy: 0.3048
Epoch 2/5
1719/1719 [==============================] - 8s 5ms/step - loss: 1.2654 - accuracy: 0.4591 - val_loss: 0.9156 - val_accuracy: 0.6372
Epoch 3/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.9312 - accuracy: 0.6169 - val_loss: 0.8928 - val_accuracy: 0.6246
Epoch 4/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.8188 - accuracy: 0.6710 - val_loss: 0.6914 - val_accuracy: 0.7396
Epoch 5/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.7288 - accuracy: 0.7152 - val_loss: 0.6638 - val_accuracy: 0.7380
###Markdown
Not great at all; we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
#bn1.updates #deprecated
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.2287 - accuracy: 0.5993 - val_loss: 0.5526 - val_accuracy: 0.8230
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5996 - accuracy: 0.7959 - val_loss: 0.4725 - val_accuracy: 0.8468
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5312 - accuracy: 0.8168 - val_loss: 0.4375 - val_accuracy: 0.8558
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4884 - accuracy: 0.8294 - val_loss: 0.4153 - val_accuracy: 0.8596
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4717 - accuracy: 0.8343 - val_loss: 0.3997 - val_accuracy: 0.8640
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4420 - accuracy: 0.8461 - val_loss: 0.3867 - val_accuracy: 0.8694
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4285 - accuracy: 0.8496 - val_loss: 0.3763 - val_accuracy: 0.8710
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4086 - accuracy: 0.8552 - val_loss: 0.3711 - val_accuracy: 0.8740
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4079 - accuracy: 0.8566 - val_loss: 0.3631 - val_accuracy: 0.8752
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3903 - accuracy: 0.8617 - val_loss: 0.3573 - val_accuracy: 0.8750
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has some as well; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.3677 - accuracy: 0.5604 - val_loss: 0.6767 - val_accuracy: 0.7812
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.7136 - accuracy: 0.7702 - val_loss: 0.5566 - val_accuracy: 0.8184
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.6123 - accuracy: 0.7990 - val_loss: 0.5007 - val_accuracy: 0.8360
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5547 - accuracy: 0.8148 - val_loss: 0.4666 - val_accuracy: 0.8448
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5255 - accuracy: 0.8230 - val_loss: 0.4434 - val_accuracy: 0.8534
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4947 - accuracy: 0.8328 - val_loss: 0.4263 - val_accuracy: 0.8550
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4736 - accuracy: 0.8385 - val_loss: 0.4130 - val_accuracy: 0.8566
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4550 - accuracy: 0.8446 - val_loss: 0.4035 - val_accuracy: 0.8612
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4495 - accuracy: 0.8440 - val_loss: 0.3943 - val_accuracy: 0.8638
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4333 - accuracy: 0.8494 - val_loss: 0.3875 - val_accuracy: 0.8660
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
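# A small illustrative sketch (an assumption, not from the book's code): clipvalue clips each
# gradient component to [-1.0, 1.0] independently (which can change the gradient's direction),
# while clipnorm rescales the whole gradient vector so its L2 norm is at most 1.0
# (direction preserved).
sample_grads = tf.constant([0.5, 3.0, -4.0])
tf.clip_by_value(sample_grads, -1.0, 1.0)  # component-wise: [0.5, 1.0, -1.0]
tf.clip_by_norm(sample_grads, 1.0)         # rescaled: same direction, norm clipped to 1.0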
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the Fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts. The validation set and the test set are also split this way, but without restricting the number of images. We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
###Output
_____no_output_____
###Markdown
Note that `model_B_on_A` and `model_A` actually share layers now, so when we train one, it will update both models. If we want to avoid that, we need to build `model_B_on_A` on top of a *clone* of `model_A`:
###Code
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
model_B_on_A = keras.models.Sequential(model_A_clone.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 0s 29ms/step - loss: 0.2575 - accuracy: 0.9487 - val_loss: 0.2797 - val_accuracy: 0.9270
Epoch 2/4
7/7 [==============================] - 0s 9ms/step - loss: 0.2566 - accuracy: 0.9371 - val_loss: 0.2701 - val_accuracy: 0.9300
Epoch 3/4
7/7 [==============================] - 0s 9ms/step - loss: 0.2473 - accuracy: 0.9332 - val_loss: 0.2613 - val_accuracy: 0.9341
Epoch 4/4
7/7 [==============================] - 0s 10ms/step - loss: 0.2450 - accuracy: 0.9463 - val_loss: 0.2531 - val_accuracy: 0.9391
Epoch 1/16
7/7 [==============================] - 1s 29ms/step - loss: 0.2106 - accuracy: 0.9524 - val_loss: 0.2045 - val_accuracy: 0.9615
Epoch 2/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1738 - accuracy: 0.9526 - val_loss: 0.1719 - val_accuracy: 0.9706
Epoch 3/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1451 - accuracy: 0.9660 - val_loss: 0.1491 - val_accuracy: 0.9807
Epoch 4/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1242 - accuracy: 0.9717 - val_loss: 0.1325 - val_accuracy: 0.9817
Epoch 5/16
7/7 [==============================] - 0s 11ms/step - loss: 0.1078 - accuracy: 0.9855 - val_loss: 0.1200 - val_accuracy: 0.9848
Epoch 6/16
7/7 [==============================] - 0s 11ms/step - loss: 0.1075 - accuracy: 0.9931 - val_loss: 0.1101 - val_accuracy: 0.9858
Epoch 7/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0893 - accuracy: 0.9950 - val_loss: 0.1020 - val_accuracy: 0.9858
Epoch 8/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0815 - accuracy: 0.9950 - val_loss: 0.0953 - val_accuracy: 0.9868
Epoch 9/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0640 - accuracy: 0.9973 - val_loss: 0.0892 - val_accuracy: 0.9868
Epoch 10/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0641 - accuracy: 0.9931 - val_loss: 0.0844 - val_accuracy: 0.9878
Epoch 11/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0609 - accuracy: 0.9931 - val_loss: 0.0800 - val_accuracy: 0.9888
Epoch 12/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0641 - accuracy: 1.0000 - val_loss: 0.0762 - val_accuracy: 0.9888
Epoch 13/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0478 - accuracy: 1.0000 - val_loss: 0.0728 - val_accuracy: 0.9888
Epoch 14/16
7/7 [==============================] - 0s 10ms/step - loss: 0.0444 - accuracy: 1.0000 - val_loss: 0.0700 - val_accuracy: 0.9878
Epoch 15/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0490 - accuracy: 1.0000 - val_loss: 0.0675 - val_accuracy: 0.9878
Epoch 16/16
7/7 [==============================] - 0s 11ms/step - loss: 0.0434 - accuracy: 1.0000 - val_loss: 0.0652 - val_accuracy: 0.9878
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 751us/step - loss: 0.0562 - accuracy: 0.9940
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4.9!
###Code
(100 - 97.05) / (100 - 99.40)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(learning_rate=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(learning_rate=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
import math
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = math.ceil(len(X_train) / batch_size)
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
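# A usage sketch (an assumption, not from the original notebook): the two-argument schedule
# is passed to the LearningRateScheduler callback exactly like the one-argument version;
# Keras supplies the current learning rate automatically at the start of each epoch.
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)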
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.learning_rate)
K.set_value(self.model.optimizer.learning_rate, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.learning_rate)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(learning_rate=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5995 - accuracy: 0.7923 - val_loss: 0.4095 - val_accuracy: 0.8606
Epoch 2/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3890 - accuracy: 0.8613 - val_loss: 0.3738 - val_accuracy: 0.8692
Epoch 3/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3530 - accuracy: 0.8772 - val_loss: 0.3735 - val_accuracy: 0.8692
Epoch 4/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3296 - accuracy: 0.8813 - val_loss: 0.3494 - val_accuracy: 0.8798
Epoch 5/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3178 - accuracy: 0.8867 - val_loss: 0.3430 - val_accuracy: 0.8794
Epoch 6/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2930 - accuracy: 0.8951 - val_loss: 0.3414 - val_accuracy: 0.8826
Epoch 7/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2854 - accuracy: 0.8985 - val_loss: 0.3354 - val_accuracy: 0.8810
Epoch 8/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9039 - val_loss: 0.3364 - val_accuracy: 0.8824
Epoch 9/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9047 - val_loss: 0.3265 - val_accuracy: 0.8846
Epoch 10/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2570 - accuracy: 0.9084 - val_loss: 0.3238 - val_accuracy: 0.8854
Epoch 11/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2502 - accuracy: 0.9117 - val_loss: 0.3250 - val_accuracy: 0.8862
Epoch 12/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2453 - accuracy: 0.9145 - val_loss: 0.3299 - val_accuracy: 0.8830
Epoch 13/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2408 - accuracy: 0.9154 - val_loss: 0.3219 - val_accuracy: 0.8870
Epoch 14/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2380 - accuracy: 0.9154 - val_loss: 0.3221 - val_accuracy: 0.8860
Epoch 15/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2378 - accuracy: 0.9166 - val_loss: 0.3208 - val_accuracy: 0.8864
Epoch 16/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2318 - accuracy: 0.9191 - val_loss: 0.3184 - val_accuracy: 0.8892
Epoch 17/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2266 - accuracy: 0.9212 - val_loss: 0.3197 - val_accuracy: 0.8906
Epoch 18/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2284 - accuracy: 0.9185 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 19/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2286 - accuracy: 0.9205 - val_loss: 0.3197 - val_accuracy: 0.8884
Epoch 20/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2288 - accuracy: 0.9211 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 21/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2265 - accuracy: 0.9212 - val_loss: 0.3179 - val_accuracy: 0.8904
Epoch 22/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2258 - accuracy: 0.9205 - val_loss: 0.3163 - val_accuracy: 0.8914
Epoch 23/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9226 - val_loss: 0.3170 - val_accuracy: 0.8904
Epoch 24/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2182 - accuracy: 0.9244 - val_loss: 0.3165 - val_accuracy: 0.8898
Epoch 25/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9229 - val_loss: 0.3164 - val_accuracy: 0.8904
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
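# A wiring sketch (assumption): as with the ExponentialDecay schedule above, this schedule
# object would be passed directly to an optimizer as its learning rate, e.g.:
# optimizer = keras.optimizers.SGD(learning_rate)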
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.learning_rate))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.learning_rate, self.model.optimizer.learning_rate * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = math.ceil(len(X) / batch_size) * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.learning_rate)
K.set_value(model.optimizer.learning_rate, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.learning_rate, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
###Output
_____no_output_____
###Markdown
**Warning**: In the `on_batch_end()` method, `logs["loss"]` used to contain the batch loss, but in TensorFlow 2.2.0 it was replaced with the mean loss (since the start of the epoch). This explains why the graph below is much smoother than in the book (if you are using TF 2.2 or above). It also means that there is a lag between the moment the batch loss starts exploding and the moment the explosion becomes clear in the graph. So you should choose a slightly smaller learning rate than you would have chosen with the "noisy" graph. Alternatively, you can tweak the `ExponentialLearningRate` callback above so it computes the batch loss (based on the current mean loss and the previous mean loss):

```python
class ExponentialLearningRate(keras.callbacks.Callback):
    def __init__(self, factor):
        self.factor = factor
        self.rates = []
        self.losses = []
    def on_epoch_begin(self, epoch, logs=None):
        self.prev_loss = 0
    def on_batch_end(self, batch, logs=None):
        batch_loss = logs["loss"] * (batch + 1) - self.prev_loss * batch
        self.prev_loss = logs["loss"]
        self.rates.append(K.get_value(self.model.optimizer.learning_rate))
        self.losses.append(batch_loss)
        K.set_value(self.model.optimizer.learning_rate,
                    self.model.optimizer.learning_rate * self.factor)
```
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.learning_rate, rate)
n_epochs = 25
onecycle = OneCycleScheduler(math.ceil(len(X_train) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 1s 2ms/step - loss: 0.6572 - accuracy: 0.7740 - val_loss: 0.4872 - val_accuracy: 0.8338
Epoch 2/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4580 - accuracy: 0.8397 - val_loss: 0.4274 - val_accuracy: 0.8520
Epoch 3/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4121 - accuracy: 0.8545 - val_loss: 0.4116 - val_accuracy: 0.8588
Epoch 4/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3837 - accuracy: 0.8642 - val_loss: 0.3868 - val_accuracy: 0.8688
Epoch 5/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3639 - accuracy: 0.8719 - val_loss: 0.3766 - val_accuracy: 0.8688
Epoch 6/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3456 - accuracy: 0.8775 - val_loss: 0.3739 - val_accuracy: 0.8706
Epoch 7/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3330 - accuracy: 0.8811 - val_loss: 0.3635 - val_accuracy: 0.8708
Epoch 8/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3184 - accuracy: 0.8861 - val_loss: 0.3959 - val_accuracy: 0.8610
Epoch 9/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3065 - accuracy: 0.8890 - val_loss: 0.3475 - val_accuracy: 0.8770
Epoch 10/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2943 - accuracy: 0.8927 - val_loss: 0.3392 - val_accuracy: 0.8806
Epoch 11/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2838 - accuracy: 0.8963 - val_loss: 0.3467 - val_accuracy: 0.8800
Epoch 12/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2707 - accuracy: 0.9024 - val_loss: 0.3646 - val_accuracy: 0.8696
Epoch 13/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2536 - accuracy: 0.9079 - val_loss: 0.3350 - val_accuracy: 0.8842
Epoch 14/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2405 - accuracy: 0.9135 - val_loss: 0.3465 - val_accuracy: 0.8794
Epoch 15/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2279 - accuracy: 0.9185 - val_loss: 0.3257 - val_accuracy: 0.8830
Epoch 16/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2159 - accuracy: 0.9232 - val_loss: 0.3294 - val_accuracy: 0.8824
Epoch 17/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2062 - accuracy: 0.9263 - val_loss: 0.3333 - val_accuracy: 0.8882
Epoch 18/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1978 - accuracy: 0.9301 - val_loss: 0.3235 - val_accuracy: 0.8898
Epoch 19/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1892 - accuracy: 0.9337 - val_loss: 0.3233 - val_accuracy: 0.8906
Epoch 20/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1821 - accuracy: 0.9365 - val_loss: 0.3224 - val_accuracy: 0.8928
Epoch 21/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1752 - accuracy: 0.9400 - val_loss: 0.3220 - val_accuracy: 0.8908
Epoch 22/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1700 - accuracy: 0.9416 - val_loss: 0.3180 - val_accuracy: 0.8962
Epoch 23/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1655 - accuracy: 0.9438 - val_loss: 0.3187 - val_accuracy: 0.8940
Epoch 24/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1627 - accuracy: 0.9454 - val_loss: 0.3177 - val_accuracy: 0.8932
Epoch 25/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1610 - accuracy: 0.9462 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 3.2911 - accuracy: 0.7924 - val_loss: 0.7218 - val_accuracy: 0.8310
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.7282 - accuracy: 0.8245 - val_loss: 0.6826 - val_accuracy: 0.8382
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 0.7611 - accuracy: 0.7576 - val_loss: 0.3730 - val_accuracy: 0.8644
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4306 - accuracy: 0.8401 - val_loss: 0.3395 - val_accuracy: 0.8722
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4225 - accuracy: 0.8432
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
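###Markdown
As a quick sketch (an assumption mirroring the earlier accuracy computation, not code from the original notebook), these MC predictions can be averaged over many stochastic forward passes and compared against the labels:
###Code
# Average the class probabilities over 100 stochastic forward passes, then pick the top class
y_probas_mc = np.stack([mc_model.predict(X_test_scaled) for sample in range(100)])
y_pred_mc = np.argmax(y_probas_mc.mean(axis=0), axis=1)
np.sum(y_pred_mc == y_test) / len(y_test)
###Output
_____no_output_____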
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5763 - accuracy: 0.8020 - val_loss: 0.3674 - val_accuracy: 0.8674
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3545 - accuracy: 0.8709 - val_loss: 0.3714 - val_accuracy: 0.8662
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(learning_rate=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4960 - accuracy: 0.4762
###Markdown
The model with the lowest validation loss gets about 47.6% accuracy on the validation set. It took 27 epochs to reach the lowest validation loss, with roughly 8 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 19s 9ms/step - loss: 1.9765 - accuracy: 0.2968 - val_loss: 1.6602 - val_accuracy: 0.4042
Epoch 2/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6787 - accuracy: 0.4056 - val_loss: 1.5887 - val_accuracy: 0.4304
Epoch 3/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6097 - accuracy: 0.4274 - val_loss: 1.5781 - val_accuracy: 0.4326
Epoch 4/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5574 - accuracy: 0.4486 - val_loss: 1.5064 - val_accuracy: 0.4676
Epoch 5/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5075 - accuracy: 0.4642 - val_loss: 1.4412 - val_accuracy: 0.4844
Epoch 6/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4664 - accuracy: 0.4787 - val_loss: 1.4179 - val_accuracy: 0.4984
Epoch 7/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4334 - accuracy: 0.4932 - val_loss: 1.4277 - val_accuracy: 0.4906
Epoch 8/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.4054 - accuracy: 0.5038 - val_loss: 1.3843 - val_accuracy: 0.5130
Epoch 9/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3816 - accuracy: 0.5106 - val_loss: 1.3691 - val_accuracy: 0.5108
Epoch 10/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3547 - accuracy: 0.5206 - val_loss: 1.3552 - val_accuracy: 0.5226
Epoch 11/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.3244 - accuracy: 0.5371 - val_loss: 1.3678 - val_accuracy: 0.5142
Epoch 12/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3078 - accuracy: 0.5393 - val_loss: 1.3844 - val_accuracy: 0.5080
Epoch 13/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2889 - accuracy: 0.5431 - val_loss: 1.3566 - val_accuracy: 0.5164
Epoch 14/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2607 - accuracy: 0.5559 - val_loss: 1.3626 - val_accuracy: 0.5248
Epoch 15/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2580 - accuracy: 0.5587 - val_loss: 1.3616 - val_accuracy: 0.5276
Epoch 16/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2441 - accuracy: 0.5586 - val_loss: 1.3350 - val_accuracy: 0.5286
Epoch 17/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2241 - accuracy: 0.5676 - val_loss: 1.3370 - val_accuracy: 0.5408
Epoch 18/100
<<29 more lines>>
Epoch 33/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0336 - accuracy: 0.6369 - val_loss: 1.3682 - val_accuracy: 0.5450
Epoch 34/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.0228 - accuracy: 0.6388 - val_loss: 1.3348 - val_accuracy: 0.5458
Epoch 35/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0205 - accuracy: 0.6407 - val_loss: 1.3490 - val_accuracy: 0.5440
Epoch 36/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.0008 - accuracy: 0.6489 - val_loss: 1.3568 - val_accuracy: 0.5408
Epoch 37/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9785 - accuracy: 0.6543 - val_loss: 1.3628 - val_accuracy: 0.5396
Epoch 38/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9832 - accuracy: 0.6592 - val_loss: 1.3617 - val_accuracy: 0.5482
Epoch 39/100
1407/1407 [==============================] - 12s 8ms/step - loss: 0.9707 - accuracy: 0.6581 - val_loss: 1.3767 - val_accuracy: 0.5446
Epoch 40/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9590 - accuracy: 0.6651 - val_loss: 1.4200 - val_accuracy: 0.5314
Epoch 41/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9548 - accuracy: 0.6668 - val_loss: 1.3692 - val_accuracy: 0.5450
Epoch 42/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9480 - accuracy: 0.6667 - val_loss: 1.3841 - val_accuracy: 0.5310
Epoch 43/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9411 - accuracy: 0.6716 - val_loss: 1.4036 - val_accuracy: 0.5382
Epoch 44/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9383 - accuracy: 0.6708 - val_loss: 1.4114 - val_accuracy: 0.5236
Epoch 45/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9258 - accuracy: 0.6769 - val_loss: 1.4224 - val_accuracy: 0.5324
Epoch 46/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9072 - accuracy: 0.6836 - val_loss: 1.3875 - val_accuracy: 0.5442
Epoch 47/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8996 - accuracy: 0.6850 - val_loss: 1.4449 - val_accuracy: 0.5280
Epoch 48/100
1407/1407 [==============================] - 13s 9ms/step - loss: 0.9050 - accuracy: 0.6835 - val_loss: 1.4167 - val_accuracy: 0.5338
Epoch 49/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8934 - accuracy: 0.6880 - val_loss: 1.4260 - val_accuracy: 0.5294
157/157 [==============================] - 1s 2ms/step - loss: 1.3344 - accuracy: 0.5398
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 27 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 5 epochs and continued to make progress until the 16th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 54.0% accuracy instead of 47.6%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 12s instead of 8s, because of the extra computations required by the BN layers. But overall the training time (wall time) was shortened significantly! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4633 - accuracy: 0.4792
###Markdown
We get 47.9% accuracy, which is not much better than the original model (47.6%), and not as good as the model using batch normalization (54.0%). However, convergence was almost as fast as with the BN model, plus each epoch took only 7 seconds. So it's by far the fastest model to train so far. e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(learning_rate=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 9s 5ms/step - loss: 2.0583 - accuracy: 0.2742 - val_loss: 1.7429 - val_accuracy: 0.3858
Epoch 2/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.6852 - accuracy: 0.4008 - val_loss: 1.7055 - val_accuracy: 0.3792
Epoch 3/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5963 - accuracy: 0.4413 - val_loss: 1.7401 - val_accuracy: 0.4072
Epoch 4/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5231 - accuracy: 0.4634 - val_loss: 1.5728 - val_accuracy: 0.4584
Epoch 5/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.4619 - accuracy: 0.4887 - val_loss: 1.5448 - val_accuracy: 0.4702
Epoch 6/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.4074 - accuracy: 0.5061 - val_loss: 1.5678 - val_accuracy: 0.4664
Epoch 7/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.3718 - accuracy: 0.5222 - val_loss: 1.5764 - val_accuracy: 0.4824
Epoch 8/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.3220 - accuracy: 0.5387 - val_loss: 1.4805 - val_accuracy: 0.4890
Epoch 9/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2908 - accuracy: 0.5487 - val_loss: 1.5521 - val_accuracy: 0.4638
Epoch 10/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.2537 - accuracy: 0.5607 - val_loss: 1.5281 - val_accuracy: 0.4924
Epoch 11/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2215 - accuracy: 0.5782 - val_loss: 1.5147 - val_accuracy: 0.5046
Epoch 12/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.1910 - accuracy: 0.5831 - val_loss: 1.5248 - val_accuracy: 0.5002
Epoch 13/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1659 - accuracy: 0.5982 - val_loss: 1.5620 - val_accuracy: 0.5066
Epoch 14/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1282 - accuracy: 0.6120 - val_loss: 1.5440 - val_accuracy: 0.5180
Epoch 15/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1127 - accuracy: 0.6133 - val_loss: 1.5782 - val_accuracy: 0.5146
Epoch 16/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0917 - accuracy: 0.6266 - val_loss: 1.6182 - val_accuracy: 0.5182
Epoch 17/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.0620 - accuracy: 0.6331 - val_loss: 1.6285 - val_accuracy: 0.5126
Epoch 18/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0433 - accuracy: 0.6413 - val_loss: 1.6299 - val_accuracy: 0.5158
Epoch 19/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0087 - accuracy: 0.6549 - val_loss: 1.7172 - val_accuracy: 0.5062
Epoch 20/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9950 - accuracy: 0.6571 - val_loss: 1.6524 - val_accuracy: 0.5098
Epoch 21/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9848 - accuracy: 0.6652 - val_loss: 1.7686 - val_accuracy: 0.5038
Epoch 22/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9597 - accuracy: 0.6744 - val_loss: 1.6177 - val_accuracy: 0.5084
Epoch 23/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9399 - accuracy: 0.6790 - val_loss: 1.7095 - val_accuracy: 0.5082
Epoch 24/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9148 - accuracy: 0.6884 - val_loss: 1.7160 - val_accuracy: 0.5150
Epoch 25/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9023 - accuracy: 0.6949 - val_loss: 1.7017 - val_accuracy: 0.5152
Epoch 26/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8732 - accuracy: 0.7031 - val_loss: 1.7274 - val_accuracy: 0.5088
Epoch 27/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.8542 - accuracy: 0.7091 - val_loss: 1.7648 - val_accuracy: 0.5166
Epoch 28/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8499 - accuracy: 0.7118 - val_loss: 1.7973 - val_accuracy: 0.5000
157/157 [==============================] - 0s 1ms/step - loss: 1.4805 - accuracy: 0.4890
###Markdown
The model reaches 48.9% accuracy on the validation set. That's very slightly better than without dropout (47.6%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple of utility functions. The first will run the model many times (10 by default) and return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
    # run the stochastic model n_samples times and average the predicted probabilities
    Y_probas = [mc_model.predict(X) for _ in range(n_samples)]
    return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
    # pick the class with the highest mean probability for each instance
    Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
    return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get no accuracy improvement in this case (we're still at 48.9% accuracy). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(learning_rate=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(math.ceil(len(X_train_scaled) / batch_size) * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/15
352/352 [==============================] - 3s 6ms/step - loss: 2.2298 - accuracy: 0.2349 - val_loss: 1.7841 - val_accuracy: 0.3834
Epoch 2/15
352/352 [==============================] - 2s 6ms/step - loss: 1.7928 - accuracy: 0.3689 - val_loss: 1.6806 - val_accuracy: 0.4086
Epoch 3/15
352/352 [==============================] - 2s 6ms/step - loss: 1.6475 - accuracy: 0.4190 - val_loss: 1.6378 - val_accuracy: 0.4350
Epoch 4/15
352/352 [==============================] - 2s 6ms/step - loss: 1.5428 - accuracy: 0.4543 - val_loss: 1.6266 - val_accuracy: 0.4390
Epoch 5/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4865 - accuracy: 0.4769 - val_loss: 1.6158 - val_accuracy: 0.4384
Epoch 6/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4339 - accuracy: 0.4866 - val_loss: 1.5850 - val_accuracy: 0.4412
Epoch 7/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4042 - accuracy: 0.5056 - val_loss: 1.6146 - val_accuracy: 0.4384
Epoch 8/15
352/352 [==============================] - 2s 6ms/step - loss: 1.3437 - accuracy: 0.5229 - val_loss: 1.5299 - val_accuracy: 0.4846
Epoch 9/15
352/352 [==============================] - 2s 5ms/step - loss: 1.2721 - accuracy: 0.5459 - val_loss: 1.5145 - val_accuracy: 0.4874
Epoch 10/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1942 - accuracy: 0.5698 - val_loss: 1.4958 - val_accuracy: 0.5040
Epoch 11/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1211 - accuracy: 0.6033 - val_loss: 1.5406 - val_accuracy: 0.4984
Epoch 12/15
352/352 [==============================] - 2s 6ms/step - loss: 1.0673 - accuracy: 0.6161 - val_loss: 1.5284 - val_accuracy: 0.5144
Epoch 13/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9927 - accuracy: 0.6435 - val_loss: 1.5449 - val_accuracy: 0.5140
Epoch 14/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9205 - accuracy: 0.6703 - val_loss: 1.5652 - val_accuracy: 0.5224
Epoch 15/15
352/352 [==============================] - 2s 6ms/step - loss: 0.8936 - accuracy: 0.6801 - val_loss: 1.5912 - val_accuracy: 0.5198
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure Matplotlib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
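# He-style initialization (scale=2) but with a uniform distribution and fan_avg instead of fan_in: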
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 2s 41us/sample - loss: 1.2810 - accuracy: 0.6205 - val_loss: 0.8869 - val_accuracy: 0.7160
Epoch 2/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.7952 - accuracy: 0.7369 - val_loss: 0.7132 - val_accuracy: 0.7626
Epoch 3/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6817 - accuracy: 0.7726 - val_loss: 0.6385 - val_accuracy: 0.7894
Epoch 4/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6219 - accuracy: 0.7942 - val_loss: 0.5931 - val_accuracy: 0.8016
Epoch 5/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5830 - accuracy: 0.8074 - val_loss: 0.5607 - val_accuracy: 0.8170
Epoch 6/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5552 - accuracy: 0.8172 - val_loss: 0.5355 - val_accuracy: 0.8238
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5339 - accuracy: 0.8226 - val_loss: 0.5166 - val_accuracy: 0.8298
Epoch 8/10
55000/55000 [==============================] - 2s 43us/sample - loss: 0.5173 - accuracy: 0.8262 - val_loss: 0.5043 - val_accuracy: 0.8356
Epoch 9/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5039 - accuracy: 0.8306 - val_loss: 0.4889 - val_accuracy: 0.8384
Epoch 10/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.4923 - accuracy: 0.8333 - val_loss: 0.4816 - val_accuracy: 0.8394
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 47us/sample - loss: 1.3452 - accuracy: 0.6203 - val_loss: 0.9241 - val_accuracy: 0.7170
Epoch 2/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.8196 - accuracy: 0.7364 - val_loss: 0.7314 - val_accuracy: 0.7600
Epoch 3/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.6970 - accuracy: 0.7701 - val_loss: 0.6517 - val_accuracy: 0.7880
Epoch 4/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.6333 - accuracy: 0.7914 - val_loss: 0.6032 - val_accuracy: 0.8050
Epoch 5/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5916 - accuracy: 0.8049 - val_loss: 0.5689 - val_accuracy: 0.8162
Epoch 6/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5619 - accuracy: 0.8143 - val_loss: 0.5416 - val_accuracy: 0.8222
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5391 - accuracy: 0.8208 - val_loss: 0.5213 - val_accuracy: 0.8300
Epoch 8/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5214 - accuracy: 0.8258 - val_loss: 0.5075 - val_accuracy: 0.8348
Epoch 9/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5070 - accuracy: 0.8287 - val_loss: 0.4917 - val_accuracy: 0.8380
Epoch 10/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.4946 - accuracy: 0.8322 - val_loss: 0.4839 - val_accuracy: 0.8378
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
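# The same string works wherever an activation is accepted, for example in a convolutional
# layer (a hypothetical illustration, not reused elsewhere in this notebook):
keras.layers.Conv2D(32, kernel_size=3, activation="elu")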
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 13s 238us/sample - loss: 1.1277 - accuracy: 0.5573 - val_loss: 0.8152 - val_accuracy: 0.6700
Epoch 2/5
55000/55000 [==============================] - 11s 198us/sample - loss: 0.6935 - accuracy: 0.7383 - val_loss: 0.5806 - val_accuracy: 0.7928
Epoch 3/5
55000/55000 [==============================] - 11s 196us/sample - loss: 0.5871 - accuracy: 0.7865 - val_loss: 0.6876 - val_accuracy: 0.7462
Epoch 4/5
55000/55000 [==============================] - 11s 199us/sample - loss: 0.5281 - accuracy: 0.8134 - val_loss: 0.5236 - val_accuracy: 0.8230
Epoch 5/5
55000/55000 [==============================] - 11s 201us/sample - loss: 0.4824 - accuracy: 0.8327 - val_loss: 0.5201 - val_accuracy: 0.8312
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 12s 213us/sample - loss: 1.7518 - accuracy: 0.2797 - val_loss: 1.2328 - val_accuracy: 0.4720
Epoch 2/5
55000/55000 [==============================] - 10s 177us/sample - loss: 1.1922 - accuracy: 0.4982 - val_loss: 1.0247 - val_accuracy: 0.5354
Epoch 3/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.9390 - accuracy: 0.6180 - val_loss: 1.0809 - val_accuracy: 0.5118
Epoch 4/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.7787 - accuracy: 0.6937 - val_loss: 0.7067 - val_accuracy: 0.7344
Epoch 5/5
55000/55000 [==============================] - 10s 180us/sample - loss: 0.7465 - accuracy: 0.7122 - val_loss: 0.9720 - val_accuracy: 0.5702
###Markdown
Not great at all, we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 63us/sample - loss: 0.8760 - accuracy: 0.7122 - val_loss: 0.5509 - val_accuracy: 0.8224
Epoch 2/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5737 - accuracy: 0.8039 - val_loss: 0.4723 - val_accuracy: 0.8460
Epoch 3/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5143 - accuracy: 0.8231 - val_loss: 0.4376 - val_accuracy: 0.8570
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4826 - accuracy: 0.8333 - val_loss: 0.4135 - val_accuracy: 0.8638
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4571 - accuracy: 0.8415 - val_loss: 0.3990 - val_accuracy: 0.8654
Epoch 6/10
55000/55000 [==============================] - 3s 53us/sample - loss: 0.4432 - accuracy: 0.8456 - val_loss: 0.3870 - val_accuracy: 0.8710
Epoch 7/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.4255 - accuracy: 0.8515 - val_loss: 0.3782 - val_accuracy: 0.8698
Epoch 8/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4150 - accuracy: 0.8536 - val_loss: 0.3708 - val_accuracy: 0.8758
Epoch 9/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4016 - accuracy: 0.8596 - val_loss: 0.3634 - val_accuracy: 0.8750
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3915 - accuracy: 0.8629 - val_loss: 0.3601 - val_accuracy: 0.8758
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer adds an offset parameter per feature anyway; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 4s 64us/sample - loss: 0.8656 - accuracy: 0.7094 - val_loss: 0.5650 - val_accuracy: 0.8098
Epoch 2/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5766 - accuracy: 0.8018 - val_loss: 0.4834 - val_accuracy: 0.8358
Epoch 3/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5184 - accuracy: 0.8216 - val_loss: 0.4461 - val_accuracy: 0.8470
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4852 - accuracy: 0.8314 - val_loss: 0.4226 - val_accuracy: 0.8558
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4579 - accuracy: 0.8399 - val_loss: 0.4086 - val_accuracy: 0.8604
Epoch 6/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4406 - accuracy: 0.8457 - val_loss: 0.3974 - val_accuracy: 0.8640
Epoch 7/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4263 - accuracy: 0.8498 - val_loss: 0.3883 - val_accuracy: 0.8676
Epoch 8/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4152 - accuracy: 0.8530 - val_loss: 0.3803 - val_accuracy: 0.8682
Epoch 9/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4032 - accuracy: 0.8564 - val_loss: 0.3738 - val_accuracy: 0.8718
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3937 - accuracy: 0.8623 - val_loss: 0.3690 - val_accuracy: 0.8732
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
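# A minimal sketch: the clipping is applied by the optimizer during training once it is
# passed to compile(); here we simply reuse the `model` built in the previous cell.
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
              metrics=["accuracy"])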
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model.summary()
model_A = keras.models.load_model("my_model_A.h5")
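# Reuse every layer of model A except its output layer (the layers are shared, not copied):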
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
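# Note that model_B_on_A shares model A's layer objects, so training it will also modify
# model A; cloning (above) and copying the weights is how you would keep an untouched copy.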
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Train on 200 samples, validate on 986 samples
Epoch 1/4
200/200 [==============================] - 0s 2ms/sample - loss: 0.5619 - accuracy: 0.6650 - val_loss: 0.5669 - val_accuracy: 0.6531
Epoch 2/4
200/200 [==============================] - 0s 208us/sample - loss: 0.5249 - accuracy: 0.7200 - val_loss: 0.5337 - val_accuracy: 0.6957
Epoch 3/4
200/200 [==============================] - 0s 200us/sample - loss: 0.4923 - accuracy: 0.7400 - val_loss: 0.5039 - val_accuracy: 0.7211
Epoch 4/4
200/200 [==============================] - 0s 214us/sample - loss: 0.4630 - accuracy: 0.7550 - val_loss: 0.4773 - val_accuracy: 0.7383
Train on 200 samples, validate on 986 samples
Epoch 1/16
200/200 [==============================] - 0s 2ms/sample - loss: 0.3864 - accuracy: 0.8200 - val_loss: 0.3357 - val_accuracy: 0.8661
Epoch 2/16
200/200 [==============================] - 0s 207us/sample - loss: 0.2701 - accuracy: 0.9350 - val_loss: 0.2608 - val_accuracy: 0.9249
Epoch 3/16
200/200 [==============================] - 0s 226us/sample - loss: 0.2082 - accuracy: 0.9650 - val_loss: 0.2150 - val_accuracy: 0.9503
Epoch 4/16
200/200 [==============================] - 0s 212us/sample - loss: 0.1695 - accuracy: 0.9800 - val_loss: 0.1840 - val_accuracy: 0.9625
Epoch 5/16
200/200 [==============================] - 0s 226us/sample - loss: 0.1428 - accuracy: 0.9800 - val_loss: 0.1602 - val_accuracy: 0.9706
Epoch 6/16
200/200 [==============================] - 0s 236us/sample - loss: 0.1221 - accuracy: 0.9850 - val_loss: 0.1424 - val_accuracy: 0.9797
Epoch 7/16
200/200 [==============================] - 0s 218us/sample - loss: 0.1067 - accuracy: 0.9950 - val_loss: 0.1293 - val_accuracy: 0.9828
Epoch 8/16
200/200 [==============================] - 0s 229us/sample - loss: 0.0952 - accuracy: 0.9950 - val_loss: 0.1186 - val_accuracy: 0.9848
Epoch 9/16
200/200 [==============================] - 0s 224us/sample - loss: 0.0858 - accuracy: 0.9950 - val_loss: 0.1099 - val_accuracy: 0.9848
Epoch 10/16
200/200 [==============================] - 0s 241us/sample - loss: 0.0781 - accuracy: 1.0000 - val_loss: 0.1026 - val_accuracy: 0.9878
Epoch 11/16
200/200 [==============================] - 0s 234us/sample - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0964 - val_accuracy: 0.9888
Epoch 12/16
200/200 [==============================] - 0s 222us/sample - loss: 0.0664 - accuracy: 1.0000 - val_loss: 0.0906 - val_accuracy: 0.9888
Epoch 13/16
200/200 [==============================] - 0s 228us/sample - loss: 0.0614 - accuracy: 1.0000 - val_loss: 0.0862 - val_accuracy: 0.9899
Epoch 14/16
200/200 [==============================] - 0s 225us/sample - loss: 0.0575 - accuracy: 1.0000 - val_loss: 0.0818 - val_accuracy: 0.9899
Epoch 15/16
200/200 [==============================] - 0s 219us/sample - loss: 0.0537 - accuracy: 1.0000 - val_loss: 0.0782 - val_accuracy: 0.9899
Epoch 16/16
200/200 [==============================] - 0s 221us/sample - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.0752 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
2000/2000 [==============================] - 0s 25us/sample - loss: 0.0697 - accuracy: 0.9925
###Markdown
Great! We got quite a bit of transfer: model B alone reached about 96.95% accuracy on the test set, while model_B_on_A reached 99.25%, so the error rate dropped from roughly 3.05% to 0.75%, a factor of about 4!
###Code
(100 - 96.95) / (100 - 99.25)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c``` (Keras uses `c=1` and `s = 1 / decay`)
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
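# For example, with decay=1e-4 the learning rate is halved after 10,000 steps:
# 0.01 / (1 + 1e-4 * 10000) = 0.005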
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
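# i.e. start at 0.01 and divide the learning rate by 10 every 20 epochs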
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20) # instantiating the closure
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler]) # using closure
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
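# A minimal sketch: this two-argument schedule plugs into LearningRateScheduler just like
# the earlier one; Keras passes the current learning rate as the second argument.
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)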
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
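# Halve the learning rate whenever the monitored metric (val_loss by default) has not
# improved for 5 consecutive epochs: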
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
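# ExponentialDecay multiplies the learning rate by 0.1 every s steps (smoothly, since
# staircase=False by default):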
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4872 - accuracy: 0.8296 - val_loss: 0.4141 - val_accuracy: 0.8548
Epoch 2/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3829 - accuracy: 0.8643 - val_loss: 0.3773 - val_accuracy: 0.8704
Epoch 3/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3495 - accuracy: 0.8763 - val_loss: 0.3696 - val_accuracy: 0.8730
Epoch 4/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3274 - accuracy: 0.8831 - val_loss: 0.3545 - val_accuracy: 0.8760
Epoch 5/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.3102 - accuracy: 0.8899 - val_loss: 0.3460 - val_accuracy: 0.8784
Epoch 6/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2971 - accuracy: 0.8945 - val_loss: 0.3415 - val_accuracy: 0.8796
Epoch 7/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2858 - accuracy: 0.8985 - val_loss: 0.3353 - val_accuracy: 0.8834
Epoch 8/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2767 - accuracy: 0.9018 - val_loss: 0.3321 - val_accuracy: 0.8854
Epoch 9/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2685 - accuracy: 0.9043 - val_loss: 0.3281 - val_accuracy: 0.8862
Epoch 10/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2612 - accuracy: 0.9075 - val_loss: 0.3304 - val_accuracy: 0.8832
Epoch 11/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2554 - accuracy: 0.9097 - val_loss: 0.3261 - val_accuracy: 0.8868
Epoch 12/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2502 - accuracy: 0.9115 - val_loss: 0.3246 - val_accuracy: 0.8876
Epoch 13/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2456 - accuracy: 0.9133 - val_loss: 0.3243 - val_accuracy: 0.8870
Epoch 14/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2416 - accuracy: 0.9141 - val_loss: 0.3238 - val_accuracy: 0.8862
Epoch 15/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2380 - accuracy: 0.9170 - val_loss: 0.3197 - val_accuracy: 0.8876
Epoch 16/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2346 - accuracy: 0.9169 - val_loss: 0.3207 - val_accuracy: 0.8866
Epoch 17/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2321 - accuracy: 0.9186 - val_loss: 0.3182 - val_accuracy: 0.8878
Epoch 18/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2291 - accuracy: 0.9191 - val_loss: 0.3206 - val_accuracy: 0.8884
Epoch 19/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2271 - accuracy: 0.9201 - val_loss: 0.3194 - val_accuracy: 0.8876
Epoch 20/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2252 - accuracy: 0.9215 - val_loss: 0.3178 - val_accuracy: 0.8880
Epoch 21/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2234 - accuracy: 0.9218 - val_loss: 0.3171 - val_accuracy: 0.8904
Epoch 22/25
55000/55000 [==============================] - 2s 41us/sample - loss: 0.2218 - accuracy: 0.9230 - val_loss: 0.3171 - val_accuracy: 0.8884
Epoch 23/25
55000/55000 [==============================] - 2s 40us/sample - loss: 0.2204 - accuracy: 0.9227 - val_loss: 0.3168 - val_accuracy: 0.8882
Epoch 24/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2191 - accuracy: 0.9240 - val_loss: 0.3173 - val_accuracy: 0.8900
Epoch 25/25
55000/55000 [==============================] - 2s 39us/sample - loss: 0.2182 - accuracy: 0.9239 - val_loss: 0.3166 - val_accuracy: 0.8892
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
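# A minimal sketch: like ExponentialDecay above, the schedule object is passed directly to
# an optimizer (note that the boundaries here are expressed in training steps, not epochs):
optimizer = keras.optimizers.SGD(learning_rate)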
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
    init_weights = model.get_weights()
    iterations = len(X) // batch_size * epochs
    # per-batch multiplicative factor so the rate grows from min_rate to max_rate over the run
    factor = np.exp(np.log(max_rate / min_rate) / iterations)
    init_lr = K.get_value(model.optimizer.lr)
    K.set_value(model.optimizer.lr, min_rate)
    exp_lr = ExponentialLearningRate(factor)
    history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
                        callbacks=[exp_lr])
    # restore the original learning rate and weights, since this run was only exploratory
    K.set_value(model.optimizer.lr, init_lr)
    model.set_weights(init_weights)
    return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.6576 - accuracy: 0.7743 - val_loss: 0.4901 - val_accuracy: 0.8300
Epoch 2/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.4587 - accuracy: 0.8387 - val_loss: 0.4316 - val_accuracy: 0.8490
Epoch 3/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.4119 - accuracy: 0.8560 - val_loss: 0.4117 - val_accuracy: 0.8580
Epoch 4/25
55000/55000 [==============================] - 1s 23us/sample - loss: 0.3842 - accuracy: 0.8657 - val_loss: 0.3920 - val_accuracy: 0.8638
Epoch 5/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3636 - accuracy: 0.8708 - val_loss: 0.3739 - val_accuracy: 0.8710
Epoch 6/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3460 - accuracy: 0.8767 - val_loss: 0.3742 - val_accuracy: 0.8690
Epoch 7/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.3312 - accuracy: 0.8818 - val_loss: 0.3760 - val_accuracy: 0.8656
Epoch 8/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.3194 - accuracy: 0.8846 - val_loss: 0.3583 - val_accuracy: 0.8756
Epoch 9/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.3056 - accuracy: 0.8902 - val_loss: 0.3474 - val_accuracy: 0.8820
Epoch 10/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2943 - accuracy: 0.8937 - val_loss: 0.3993 - val_accuracy: 0.8562
Epoch 11/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2845 - accuracy: 0.8957 - val_loss: 0.3446 - val_accuracy: 0.8820
Epoch 12/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2720 - accuracy: 0.9020 - val_loss: 0.3348 - val_accuracy: 0.8808
Epoch 13/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.2536 - accuracy: 0.9094 - val_loss: 0.3386 - val_accuracy: 0.8822
Epoch 14/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.2420 - accuracy: 0.9125 - val_loss: 0.3313 - val_accuracy: 0.8858
Epoch 15/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.2288 - accuracy: 0.9174 - val_loss: 0.3241 - val_accuracy: 0.8840
Epoch 16/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2169 - accuracy: 0.9222 - val_loss: 0.3342 - val_accuracy: 0.8846
Epoch 17/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.2067 - accuracy: 0.9264 - val_loss: 0.3208 - val_accuracy: 0.8874
Epoch 18/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1977 - accuracy: 0.9301 - val_loss: 0.3186 - val_accuracy: 0.8888
Epoch 19/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1892 - accuracy: 0.9329 - val_loss: 0.3278 - val_accuracy: 0.8848
Epoch 20/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1818 - accuracy: 0.9375 - val_loss: 0.3195 - val_accuracy: 0.8894
Epoch 21/25
55000/55000 [==============================] - 1s 20us/sample - loss: 0.1756 - accuracy: 0.9395 - val_loss: 0.3163 - val_accuracy: 0.8948
Epoch 22/25
55000/55000 [==============================] - 1s 21us/sample - loss: 0.1701 - accuracy: 0.9416 - val_loss: 0.3177 - val_accuracy: 0.8920
Epoch 23/25
55000/55000 [==============================] - 1s 22us/sample - loss: 0.1657 - accuracy: 0.9441 - val_loss: 0.3168 - val_accuracy: 0.8944
Epoch 24/25
55000/55000 [==============================] - 1s 19us/sample - loss: 0.1629 - accuracy: 0.9454 - val_loss: 0.3167 - val_accuracy: 0.8946
Epoch 25/25
55000/55000 [==============================] - 1s 18us/sample - loss: 0.1611 - accuracy: 0.9465 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization

$\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 7s 133us/sample - loss: 1.6006 - accuracy: 0.8129 - val_loss: 0.7374 - val_accuracy: 0.8236
Epoch 2/2
55000/55000 [==============================] - 7s 128us/sample - loss: 0.7179 - accuracy: 0.8265 - val_loss: 0.6905 - val_accuracy: 0.8356
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 139us/sample - loss: 0.5856 - accuracy: 0.7992 - val_loss: 0.3908 - val_accuracy: 0.8570
Epoch 2/2
55000/55000 [==============================] - 6s 117us/sample - loss: 0.4260 - accuracy: 0.8443 - val_loss: 0.3389 - val_accuracy: 0.8730
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
Train on 55000 samples
55000/55000 [==============================] - 2s 44us/sample - loss: 0.4186 - accuracy: 0.8451
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
# run 100 stochastic forward passes with dropout kept active (training=True)
y_probas = np.stack([model(X_test_scaled, training=True)
                     for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0) # standard deviation across the 100 Monte Carlo samples
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
        return super().call(inputs, training=True) # keep dropout active even at inference time
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 6s 114us/sample - loss: 0.4734 - accuracy: 0.8364 - val_loss: 0.3999 - val_accuracy: 0.8614
Epoch 2/2
55000/55000 [==============================] - 6s 100us/sample - loss: 0.3583 - accuracy: 0.8685 - val_loss: 0.3494 - val_accuracy: 0.8746
###Markdown
Exercises

1. to 7. See appendix A.

8. Deep Learning on CIFAR10

a. *Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3])) # flatten image
for _ in range(20): # add 20 dense layers
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b. *Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.*

Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
5000/5000 [==============================] - 0s 65us/sample - loss: 1.5099 - accuracy: 0.4736
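###Markdown
The learning-rate comparison described above can be reproduced with a small loop: train the same architecture for 10 epochs at each candidate rate and inspect the curves in TensorBoard. A rough sketch, where `build_cifar10_model` is a hypothetical helper that simply mirrors the architecture defined above:
###Code
# Rough sketch of the learning-rate comparison (hypothetical helper and run names).
def build_cifar10_model():
    model = keras.models.Sequential()
    model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
    for _ in range(20):
        model.add(keras.layers.Dense(100, activation="elu",
                                     kernel_initializer="he_normal"))
    model.add(keras.layers.Dense(10, activation="softmax"))
    return model
for rate in (1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 1e-3, 3e-3, 1e-2):
    keras.backend.clear_session()
    lr_model = build_cifar10_model()
    lr_model.compile(loss="sparse_categorical_crossentropy",
                     optimizer=keras.optimizers.Nadam(lr=rate),
                     metrics=["accuracy"])
    # one TensorBoard run directory per candidate learning rate
    lr_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_lr_{:g}".format(rate))
    lr_model.fit(X_train, y_train, epochs=10,
                 validation_data=(X_valid, y_valid),
                 callbacks=[keras.callbacks.TensorBoard(lr_logdir)])
###Output
_____no_output_____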
###Markdown
The model with the lowest validation loss gets about 47% accuracy on the validation set. It took 39 epochs to reach the lowest validation loss, with roughly 10 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization.

c. *Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?*

The code below is very similar to the code above, with a few changes:
* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.
* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.
* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 21s 466us/sample - loss: 1.8365 - accuracy: 0.3390 - val_loss: 1.6330 - val_accuracy: 0.4174
Epoch 2/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.6623 - accuracy: 0.4063 - val_loss: 1.5967 - val_accuracy: 0.4204
Epoch 3/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.5946 - accuracy: 0.4314 - val_loss: 1.5225 - val_accuracy: 0.4602
Epoch 4/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5417 - accuracy: 0.4551 - val_loss: 1.4680 - val_accuracy: 0.4756
Epoch 5/100
45000/45000 [==============================] - 17s 367us/sample - loss: 1.5013 - accuracy: 0.4678 - val_loss: 1.4378 - val_accuracy: 0.4862
Epoch 6/100
45000/45000 [==============================] - 16s 361us/sample - loss: 1.4637 - accuracy: 0.4797 - val_loss: 1.4221 - val_accuracy: 0.4982
Epoch 7/100
45000/45000 [==============================] - 16s 355us/sample - loss: 1.4361 - accuracy: 0.4921 - val_loss: 1.4133 - val_accuracy: 0.4968
Epoch 8/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.4078 - accuracy: 0.4998 - val_loss: 1.3916 - val_accuracy: 0.5040
Epoch 9/100
45000/45000 [==============================] - 14s 315us/sample - loss: 1.3811 - accuracy: 0.5104 - val_loss: 1.3695 - val_accuracy: 0.5116
Epoch 10/100
45000/45000 [==============================] - 14s 318us/sample - loss: 1.3571 - accuracy: 0.5205 - val_loss: 1.3701 - val_accuracy: 0.5112
Epoch 11/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.3367 - accuracy: 0.5246 - val_loss: 1.3549 - val_accuracy: 0.5196
Epoch 12/100
45000/45000 [==============================] - 14s 316us/sample - loss: 1.3158 - accuracy: 0.5322 - val_loss: 1.4038 - val_accuracy: 0.5048
Epoch 13/100
45000/45000 [==============================] - 15s 328us/sample - loss: 1.3028 - accuracy: 0.5392 - val_loss: 1.3453 - val_accuracy: 0.5242
Epoch 14/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2798 - accuracy: 0.5460 - val_loss: 1.3427 - val_accuracy: 0.5218
Epoch 15/100
45000/45000 [==============================] - 15s 327us/sample - loss: 1.2642 - accuracy: 0.5502 - val_loss: 1.3802 - val_accuracy: 0.5072
Epoch 16/100
45000/45000 [==============================] - 15s 336us/sample - loss: 1.2497 - accuracy: 0.5592 - val_loss: 1.3870 - val_accuracy: 0.5154
Epoch 17/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.2339 - accuracy: 0.5645 - val_loss: 1.3270 - val_accuracy: 0.5366
Epoch 18/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.2223 - accuracy: 0.5688 - val_loss: 1.3054 - val_accuracy: 0.5506
Epoch 19/100
45000/45000 [==============================] - 15s 339us/sample - loss: 1.2015 - accuracy: 0.5750 - val_loss: 1.3134 - val_accuracy: 0.5462
Epoch 20/100
45000/45000 [==============================] - 15s 326us/sample - loss: 1.1884 - accuracy: 0.5796 - val_loss: 1.3459 - val_accuracy: 0.5252
Epoch 21/100
45000/45000 [==============================] - 17s 370us/sample - loss: 1.1767 - accuracy: 0.5876 - val_loss: 1.3404 - val_accuracy: 0.5392
Epoch 22/100
45000/45000 [==============================] - 16s 366us/sample - loss: 1.1679 - accuracy: 0.5872 - val_loss: 1.3600 - val_accuracy: 0.5332
Epoch 23/100
45000/45000 [==============================] - 15s 337us/sample - loss: 1.1513 - accuracy: 0.5954 - val_loss: 1.3148 - val_accuracy: 0.5498
Epoch 24/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.1345 - accuracy: 0.6033 - val_loss: 1.3290 - val_accuracy: 0.5368
Epoch 25/100
45000/45000 [==============================] - 16s 350us/sample - loss: 1.1252 - accuracy: 0.6025 - val_loss: 1.3350 - val_accuracy: 0.5434
Epoch 26/100
45000/45000 [==============================] - 15s 341us/sample - loss: 1.1192 - accuracy: 0.6070 - val_loss: 1.3423 - val_accuracy: 0.5364
Epoch 27/100
45000/45000 [==============================] - 15s 342us/sample - loss: 1.1028 - accuracy: 0.6093 - val_loss: 1.3511 - val_accuracy: 0.5358
Epoch 28/100
45000/45000 [==============================] - 15s 332us/sample - loss: 1.0907 - accuracy: 0.6158 - val_loss: 1.3706 - val_accuracy: 0.5350
Epoch 29/100
45000/45000 [==============================] - 16s 345us/sample - loss: 1.0785 - accuracy: 0.6197 - val_loss: 1.3356 - val_accuracy: 0.5398
Epoch 30/100
45000/45000 [==============================] - 16s 352us/sample - loss: 1.0718 - accuracy: 0.6198 - val_loss: 1.3529 - val_accuracy: 0.5446
Epoch 31/100
45000/45000 [==============================] - 15s 333us/sample - loss: 1.0629 - accuracy: 0.6259 - val_loss: 1.3590 - val_accuracy: 0.5434
Epoch 32/100
45000/45000 [==============================] - 15s 331us/sample - loss: 1.0504 - accuracy: 0.6292 - val_loss: 1.3448 - val_accuracy: 0.5388
Epoch 33/100
45000/45000 [==============================] - 15s 325us/sample - loss: 1.0420 - accuracy: 0.6318 - val_loss: 1.3790 - val_accuracy: 0.5350
Epoch 34/100
45000/45000 [==============================] - 16s 346us/sample - loss: 1.0304 - accuracy: 0.6362 - val_loss: 1.3621 - val_accuracy: 0.5430
Epoch 35/100
45000/45000 [==============================] - 16s 356us/sample - loss: 1.0280 - accuracy: 0.6362 - val_loss: 1.3673 - val_accuracy: 0.5366
Epoch 36/100
45000/45000 [==============================] - 16s 354us/sample - loss: 1.0100 - accuracy: 0.6439 - val_loss: 1.3659 - val_accuracy: 0.5420
Epoch 37/100
45000/45000 [==============================] - 15s 329us/sample - loss: 1.0060 - accuracy: 0.6473 - val_loss: 1.3773 - val_accuracy: 0.5398
Epoch 38/100
45000/45000 [==============================] - 15s 332us/sample - loss: 0.9966 - accuracy: 0.6496 - val_loss: 1.3946 - val_accuracy: 0.5340
5000/5000 [==============================] - 1s 157us/sample - loss: 1.3054 - accuracy: 0.5506
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 39 epochs to reach the lowest validation loss, while the new model with BN took 18 epochs. That's more than twice as fast as the previous model. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.
* *Does BN produce a better model?* Yes! The final model is also much better, with 55% accuracy instead of 47%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic; see chapter 14).
* *How does BN affect training speed?* Although the model converged twice as fast, each epoch took about 16s instead of 10s, because of the extra computations required by the BN layers. So overall, although the number of epochs was reduced by 50%, the training time (wall time) was only shortened by about 30%, which is still pretty significant!

d. *Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustments to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
5000/5000 [==============================] - 0s 74us/sample - loss: 1.4626 - accuracy: 0.5140
###Markdown
We get 51.4% accuracy, which is better than the original model, but not quite as good as the model using batch normalization. Moreover, it took 13 epochs to reach the best model, which is much faster than both the original model and the BN model, plus each epoch took only 10 seconds, just like the original model. So it's by far the fastest model to train (both in terms of epochs and wall time).

e. *Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/100
45000/45000 [==============================] - 12s 263us/sample - loss: 1.8763 - accuracy: 0.3330 - val_loss: 1.7595 - val_accuracy: 0.3668
Epoch 2/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.6527 - accuracy: 0.4148 - val_loss: 1.7666 - val_accuracy: 0.3808
Epoch 3/100
45000/45000 [==============================] - 10s 219us/sample - loss: 1.5682 - accuracy: 0.4439 - val_loss: 1.6393 - val_accuracy: 0.4490
Epoch 4/100
45000/45000 [==============================] - 10s 211us/sample - loss: 1.5030 - accuracy: 0.4698 - val_loss: 1.6028 - val_accuracy: 0.4466
Epoch 5/100
45000/45000 [==============================] - 9s 209us/sample - loss: 1.4430 - accuracy: 0.4913 - val_loss: 1.5394 - val_accuracy: 0.4562
Epoch 6/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.4005 - accuracy: 0.5084 - val_loss: 1.5408 - val_accuracy: 0.4818
Epoch 7/100
45000/45000 [==============================] - 10s 216us/sample - loss: 1.3541 - accuracy: 0.5298 - val_loss: 1.5236 - val_accuracy: 0.4866
Epoch 8/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.3189 - accuracy: 0.5405 - val_loss: 1.5174 - val_accuracy: 0.4926
Epoch 9/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.2800 - accuracy: 0.5570 - val_loss: 1.5722 - val_accuracy: 0.4998
Epoch 10/100
45000/45000 [==============================] - 10s 214us/sample - loss: 1.2512 - accuracy: 0.5656 - val_loss: 1.4974 - val_accuracy: 0.5082
Epoch 11/100
45000/45000 [==============================] - 9s 203us/sample - loss: 1.2141 - accuracy: 0.5802 - val_loss: 1.6123 - val_accuracy: 0.4916
Epoch 12/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.1856 - accuracy: 0.5893 - val_loss: 1.5449 - val_accuracy: 0.5016
Epoch 13/100
45000/45000 [==============================] - 9s 204us/sample - loss: 1.1602 - accuracy: 0.5978 - val_loss: 1.6241 - val_accuracy: 0.5056
Epoch 14/100
45000/45000 [==============================] - 9s 199us/sample - loss: 1.1290 - accuracy: 0.6118 - val_loss: 1.6085 - val_accuracy: 0.4936
Epoch 15/100
45000/45000 [==============================] - 9s 198us/sample - loss: 1.1050 - accuracy: 0.6176 - val_loss: 1.6951 - val_accuracy: 0.4860
Epoch 16/100
45000/45000 [==============================] - 9s 201us/sample - loss: 1.0786 - accuracy: 0.6293 - val_loss: 1.5806 - val_accuracy: 0.5044
Epoch 17/100
45000/45000 [==============================] - 10s 212us/sample - loss: 1.0629 - accuracy: 0.6362 - val_loss: 1.5932 - val_accuracy: 0.4970
Epoch 18/100
45000/45000 [==============================] - 10s 215us/sample - loss: 1.0330 - accuracy: 0.6458 - val_loss: 1.5968 - val_accuracy: 0.5080
Epoch 19/100
45000/45000 [==============================] - 9s 195us/sample - loss: 1.0104 - accuracy: 0.6488 - val_loss: 1.6166 - val_accuracy: 0.5152
Epoch 20/100
45000/45000 [==============================] - 9s 206us/sample - loss: 0.9896 - accuracy: 0.6629 - val_loss: 1.6174 - val_accuracy: 0.5154
Epoch 21/100
45000/45000 [==============================] - 9s 211us/sample - loss: 0.9741 - accuracy: 0.6650 - val_loss: 1.7201 - val_accuracy: 0.5040
Epoch 22/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9475 - accuracy: 0.6769 - val_loss: 1.7498 - val_accuracy: 0.5176
Epoch 23/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.9346 - accuracy: 0.6780 - val_loss: 1.7491 - val_accuracy: 0.5020
Epoch 24/100
45000/45000 [==============================] - 10s 223us/sample - loss: 1.1878 - accuracy: 0.6792 - val_loss: 1.6664 - val_accuracy: 0.4906
Epoch 25/100
45000/45000 [==============================] - 10s 219us/sample - loss: 0.9851 - accuracy: 0.6646 - val_loss: 1.7358 - val_accuracy: 0.5086
Epoch 26/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.9053 - accuracy: 0.6911 - val_loss: 1.8361 - val_accuracy: 0.5094
Epoch 27/100
45000/45000 [==============================] - 10s 215us/sample - loss: 0.8681 - accuracy: 0.7048 - val_loss: 1.8487 - val_accuracy: 0.5036
Epoch 28/100
45000/45000 [==============================] - 10s 220us/sample - loss: 0.8460 - accuracy: 0.7132 - val_loss: 1.8516 - val_accuracy: 0.5068
Epoch 29/100
45000/45000 [==============================] - 10s 223us/sample - loss: 0.8258 - accuracy: 0.7208 - val_loss: 1.9383 - val_accuracy: 0.5094
Epoch 30/100
45000/45000 [==============================] - 10s 216us/sample - loss: 0.8106 - accuracy: 0.7248 - val_loss: 2.0527 - val_accuracy: 0.4974
5000/5000 [==============================] - 0s 71us/sample - loss: 1.4974 - accuracy: 0.5082
###Markdown
The model reaches 50.8% accuracy on the validation set. That's very slightly worse than without dropout (51.4%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case.

Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get virtually no accuracy improvement in this case (from 50.8% to 50.9%). So the best model we got in this exercise is the Batch Normalization model.

f. *Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(len(X_train_scaled) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/15
45000/45000 [==============================] - 3s 69us/sample - loss: 2.0504 - accuracy: 0.2823 - val_loss: 1.7711 - val_accuracy: 0.3706
Epoch 2/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.7626 - accuracy: 0.3766 - val_loss: 1.7751 - val_accuracy: 0.3844
Epoch 3/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.6264 - accuracy: 0.4272 - val_loss: 1.6774 - val_accuracy: 0.4216
Epoch 4/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.5527 - accuracy: 0.4474 - val_loss: 1.6633 - val_accuracy: 0.4316
Epoch 5/15
45000/45000 [==============================] - 3s 59us/sample - loss: 1.4997 - accuracy: 0.4701 - val_loss: 1.5909 - val_accuracy: 0.4540
Epoch 6/15
45000/45000 [==============================] - 3s 60us/sample - loss: 1.4564 - accuracy: 0.4841 - val_loss: 1.5982 - val_accuracy: 0.4624
Epoch 7/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.4232 - accuracy: 0.4958 - val_loss: 1.6417 - val_accuracy: 0.4382
Epoch 8/15
45000/45000 [==============================] - 3s 58us/sample - loss: 1.3530 - accuracy: 0.5199 - val_loss: 1.5050 - val_accuracy: 0.4778
Epoch 9/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.2771 - accuracy: 0.5480 - val_loss: 1.5254 - val_accuracy: 0.4928
Epoch 10/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.2073 - accuracy: 0.5726 - val_loss: 1.5013 - val_accuracy: 0.5052
Epoch 11/15
45000/45000 [==============================] - 3s 57us/sample - loss: 1.1380 - accuracy: 0.5948 - val_loss: 1.4941 - val_accuracy: 0.5170
Epoch 12/15
45000/45000 [==============================] - 3s 56us/sample - loss: 1.0672 - accuracy: 0.6204 - val_loss: 1.5091 - val_accuracy: 0.5106
Epoch 13/15
45000/45000 [==============================] - 3s 56us/sample - loss: 0.9967 - accuracy: 0.6466 - val_loss: 1.5261 - val_accuracy: 0.5212
Epoch 14/15
45000/45000 [==============================] - 3s 58us/sample - loss: 0.9301 - accuracy: 0.6712 - val_loss: 1.5437 - val_accuracy: 0.5264
Epoch 15/15
45000/45000 [==============================] - 3s 59us/sample - loss: 0.8893 - accuracy: 0.6866 - val_loss: 1.5650 - val_accuracy: 0.5276
###Markdown
**Chapter 11 – Training Deep Neural Networks**

_This notebook contains all the sample code and solutions to the exercises in chapter 11._

Setup

First, let's import a few common modules, ensure Matplotlib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0-preview.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# TensorFlow ≥2.0-preview is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions

Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
WARNING: Logging before flag parsing goes to stderr.
W0610 10:46:09.866298 140735810999168 deprecation.py:323] From /Users/ageron/miniconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/python/ops/math_grad.py:1251: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 61us/sample - loss: 1.3460 - accuracy: 0.6233 - val_loss: 0.9251 - val_accuracy: 0.7208
Epoch 2/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.8208 - accuracy: 0.7359 - val_loss: 0.7318 - val_accuracy: 0.7626
Epoch 3/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.6974 - accuracy: 0.7695 - val_loss: 0.6500 - val_accuracy: 0.7886
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.6338 - accuracy: 0.7904 - val_loss: 0.6000 - val_accuracy: 0.8070
Epoch 5/10
55000/55000 [==============================] - 3s 57us/sample - loss: 0.5920 - accuracy: 0.8045 - val_loss: 0.5662 - val_accuracy: 0.8172
Epoch 6/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5620 - accuracy: 0.8138 - val_loss: 0.5416 - val_accuracy: 0.8230
Epoch 7/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5393 - accuracy: 0.8203 - val_loss: 0.5218 - val_accuracy: 0.8302
Epoch 8/10
55000/55000 [==============================] - 3s 57us/sample - loss: 0.5216 - accuracy: 0.8248 - val_loss: 0.5051 - val_accuracy: 0.8340
Epoch 9/10
55000/55000 [==============================] - 3s 59us/sample - loss: 0.5069 - accuracy: 0.8289 - val_loss: 0.4923 - val_accuracy: 0.8384
Epoch 10/10
55000/55000 [==============================] - 3s 62us/sample - loss: 0.4948 - accuracy: 0.8322 - val_loss: 0.4847 - val_accuracy: 0.8372
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU

This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 35s 644us/sample - loss: 1.0197 - accuracy: 0.6154 - val_loss: 0.7386 - val_accuracy: 0.7348
Epoch 2/5
55000/55000 [==============================] - 33s 607us/sample - loss: 0.7149 - accuracy: 0.7401 - val_loss: 0.6187 - val_accuracy: 0.7774
Epoch 3/5
55000/55000 [==============================] - 32s 583us/sample - loss: 0.6193 - accuracy: 0.7803 - val_loss: 0.5926 - val_accuracy: 0.8036
Epoch 4/5
55000/55000 [==============================] - 32s 586us/sample - loss: 0.5555 - accuracy: 0.8043 - val_loss: 0.5208 - val_accuracy: 0.8262
Epoch 5/5
55000/55000 [==============================] - 32s 573us/sample - loss: 0.5159 - accuracy: 0.8238 - val_loss: 0.4790 - val_accuracy: 0.8358
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 18s 319us/sample - loss: 1.9174 - accuracy: 0.2242 - val_loss: 1.3856 - val_accuracy: 0.3846
Epoch 2/5
55000/55000 [==============================] - 15s 279us/sample - loss: 1.2147 - accuracy: 0.4750 - val_loss: 1.0691 - val_accuracy: 0.5510
Epoch 3/5
55000/55000 [==============================] - 15s 281us/sample - loss: 0.9576 - accuracy: 0.6025 - val_loss: 0.7688 - val_accuracy: 0.7036
Epoch 4/5
55000/55000 [==============================] - 15s 281us/sample - loss: 0.8116 - accuracy: 0.6762 - val_loss: 0.7276 - val_accuracy: 0.7288
Epoch 5/5
55000/55000 [==============================] - 15s 278us/sample - loss: 0.8167 - accuracy: 0.6862 - val_loss: 0.7697 - val_accuracy: 0.7032
###Markdown
Not great at all; we suffered from the vanishing/exploding gradients problem.

Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 5s 85us/sample - loss: 0.8756 - accuracy: 0.7140 - val_loss: 0.5514 - val_accuracy: 0.8212
Epoch 2/10
55000/55000 [==============================] - 4s 74us/sample - loss: 0.5765 - accuracy: 0.8033 - val_loss: 0.4742 - val_accuracy: 0.8436
Epoch 3/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.5146 - accuracy: 0.8216 - val_loss: 0.4382 - val_accuracy: 0.8530
Epoch 4/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4821 - accuracy: 0.8322 - val_loss: 0.4170 - val_accuracy: 0.8604
Epoch 5/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4589 - accuracy: 0.8402 - val_loss: 0.4003 - val_accuracy: 0.8658
Epoch 6/10
55000/55000 [==============================] - 4s 75us/sample - loss: 0.4428 - accuracy: 0.8459 - val_loss: 0.3883 - val_accuracy: 0.8698
Epoch 7/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4220 - accuracy: 0.8521 - val_loss: 0.3792 - val_accuracy: 0.8720
Epoch 8/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4150 - accuracy: 0.8546 - val_loss: 0.3696 - val_accuracy: 0.8754
Epoch 9/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4013 - accuracy: 0.8589 - val_loss: 0.3629 - val_accuracy: 0.8746
Epoch 10/10
55000/55000 [==============================] - 4s 74us/sample - loss: 0.3931 - accuracy: 0.8615 - val_loss: 0.3581 - val_accuracy: 0.8766
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need bias terms, since the `BatchNormalization` layer has its own offset parameter; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
    keras.layers.Dense(100, use_bias=False),
    keras.layers.BatchNormalization(),
    keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 5s 89us/sample - loss: 0.8617 - accuracy: 0.7095 - val_loss: 0.5649 - val_accuracy: 0.8102
Epoch 2/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.5803 - accuracy: 0.8015 - val_loss: 0.4833 - val_accuracy: 0.8344
Epoch 3/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.5153 - accuracy: 0.8208 - val_loss: 0.4463 - val_accuracy: 0.8462
Epoch 4/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.4846 - accuracy: 0.8307 - val_loss: 0.4256 - val_accuracy: 0.8530
Epoch 5/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.4576 - accuracy: 0.8402 - val_loss: 0.4106 - val_accuracy: 0.8590
Epoch 6/10
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4401 - accuracy: 0.8467 - val_loss: 0.3973 - val_accuracy: 0.8610
Epoch 7/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4296 - accuracy: 0.8482 - val_loss: 0.3899 - val_accuracy: 0.8650
Epoch 8/10
55000/55000 [==============================] - 4s 76us/sample - loss: 0.4127 - accuracy: 0.8559 - val_loss: 0.3818 - val_accuracy: 0.8658
Epoch 9/10
55000/55000 [==============================] - 4s 78us/sample - loss: 0.4007 - accuracy: 0.8588 - val_loss: 0.3741 - val_accuracy: 0.8682
Epoch 10/10
55000/55000 [==============================] - 4s 79us/sample - loss: 0.3929 - accuracy: 0.8621 - val_loss: 0.3694 - val_accuracy: 0.8734
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
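# A minimal usage sketch added for illustration (not original notebook content):
# a clipped optimizer is passed to compile() like any other optimizer.
model.compile(loss="sparse_categorical_crossentropy",
              optimizer=optimizer, metrics=["accuracy"])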
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Train on 200 samples, validate on 986 samples
Epoch 1/4
200/200 [==============================] - 0s 2ms/sample - loss: 0.5851 - accuracy: 0.6600 - val_loss: 0.5855 - val_accuracy: 0.6318
Epoch 2/4
200/200 [==============================] - 0s 303us/sample - loss: 0.5484 - accuracy: 0.6850 - val_loss: 0.5484 - val_accuracy: 0.6775
Epoch 3/4
200/200 [==============================] - 0s 294us/sample - loss: 0.5116 - accuracy: 0.7250 - val_loss: 0.5141 - val_accuracy: 0.7160
Epoch 4/4
200/200 [==============================] - 0s 316us/sample - loss: 0.4779 - accuracy: 0.7450 - val_loss: 0.4859 - val_accuracy: 0.7363
Train on 200 samples, validate on 986 samples
Epoch 1/16
200/200 [==============================] - 0s 2ms/sample - loss: 0.3989 - accuracy: 0.8050 - val_loss: 0.3419 - val_accuracy: 0.8702
Epoch 2/16
200/200 [==============================] - 0s 328us/sample - loss: 0.2795 - accuracy: 0.9300 - val_loss: 0.2624 - val_accuracy: 0.9280
Epoch 3/16
200/200 [==============================] - 0s 319us/sample - loss: 0.2128 - accuracy: 0.9650 - val_loss: 0.2150 - val_accuracy: 0.9544
Epoch 4/16
200/200 [==============================] - 0s 318us/sample - loss: 0.1720 - accuracy: 0.9800 - val_loss: 0.1826 - val_accuracy: 0.9635
Epoch 5/16
200/200 [==============================] - 0s 317us/sample - loss: 0.1436 - accuracy: 0.9800 - val_loss: 0.1586 - val_accuracy: 0.9736
Epoch 6/16
200/200 [==============================] - 0s 317us/sample - loss: 0.1231 - accuracy: 0.9850 - val_loss: 0.1407 - val_accuracy: 0.9807
Epoch 7/16
200/200 [==============================] - 0s 325us/sample - loss: 0.1074 - accuracy: 0.9900 - val_loss: 0.1270 - val_accuracy: 0.9828
Epoch 8/16
200/200 [==============================] - 0s 326us/sample - loss: 0.0953 - accuracy: 0.9950 - val_loss: 0.1158 - val_accuracy: 0.9848
Epoch 9/16
200/200 [==============================] - 0s 319us/sample - loss: 0.0854 - accuracy: 1.0000 - val_loss: 0.1076 - val_accuracy: 0.9878
Epoch 10/16
200/200 [==============================] - 0s 322us/sample - loss: 0.0781 - accuracy: 1.0000 - val_loss: 0.1007 - val_accuracy: 0.9888
Epoch 11/16
200/200 [==============================] - 0s 316us/sample - loss: 0.0718 - accuracy: 1.0000 - val_loss: 0.0944 - val_accuracy: 0.9888
Epoch 12/16
200/200 [==============================] - 0s 319us/sample - loss: 0.0662 - accuracy: 1.0000 - val_loss: 0.0891 - val_accuracy: 0.9899
Epoch 13/16
200/200 [==============================] - 0s 318us/sample - loss: 0.0613 - accuracy: 1.0000 - val_loss: 0.0846 - val_accuracy: 0.9899
Epoch 14/16
200/200 [==============================] - 0s 332us/sample - loss: 0.0574 - accuracy: 1.0000 - val_loss: 0.0806 - val_accuracy: 0.9899
Epoch 15/16
200/200 [==============================] - 0s 320us/sample - loss: 0.0538 - accuracy: 1.0000 - val_loss: 0.0770 - val_accuracy: 0.9899
Epoch 16/16
200/200 [==============================] - 0s 320us/sample - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.0740 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
2000/2000 [==============================] - 0s 38us/sample - loss: 0.0689 - accuracy: 0.9925
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of almost 4!
###Code
(100 - 97.05) / (100 - 99.25)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 4s 77us/sample - loss: 0.4887 - accuracy: 0.8282 - val_loss: 0.4245 - val_accuracy: 0.8526
Epoch 2/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.3830 - accuracy: 0.8641 - val_loss: 0.3798 - val_accuracy: 0.8688
Epoch 3/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.3491 - accuracy: 0.8758 - val_loss: 0.3650 - val_accuracy: 0.8730
Epoch 4/25
55000/55000 [==============================] - 4s 78us/sample - loss: 0.3267 - accuracy: 0.8839 - val_loss: 0.3564 - val_accuracy: 0.8746
Epoch 5/25
55000/55000 [==============================] - 4s 72us/sample - loss: 0.3102 - accuracy: 0.8893 - val_loss: 0.3493 - val_accuracy: 0.8770
Epoch 6/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2969 - accuracy: 0.8939 - val_loss: 0.3400 - val_accuracy: 0.8818
Epoch 7/25
55000/55000 [==============================] - 4s 77us/sample - loss: 0.2855 - accuracy: 0.8983 - val_loss: 0.3385 - val_accuracy: 0.8830
Epoch 8/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2764 - accuracy: 0.9025 - val_loss: 0.3372 - val_accuracy: 0.8824
Epoch 9/25
55000/55000 [==============================] - 4s 67us/sample - loss: 0.2684 - accuracy: 0.9039 - val_loss: 0.3337 - val_accuracy: 0.8848
Epoch 10/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2613 - accuracy: 0.9072 - val_loss: 0.3277 - val_accuracy: 0.8862
Epoch 11/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2555 - accuracy: 0.9086 - val_loss: 0.3273 - val_accuracy: 0.8860
Epoch 12/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2500 - accuracy: 0.9111 - val_loss: 0.3244 - val_accuracy: 0.8840
Epoch 13/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2454 - accuracy: 0.9124 - val_loss: 0.3194 - val_accuracy: 0.8904
Epoch 14/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2414 - accuracy: 0.9141 - val_loss: 0.3226 - val_accuracy: 0.8884
Epoch 15/25
55000/55000 [==============================] - 4s 73us/sample - loss: 0.2378 - accuracy: 0.9160 - val_loss: 0.3233 - val_accuracy: 0.8860
Epoch 16/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2347 - accuracy: 0.9174 - val_loss: 0.3207 - val_accuracy: 0.8904
Epoch 17/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2318 - accuracy: 0.9179 - val_loss: 0.3195 - val_accuracy: 0.8892
Epoch 18/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2293 - accuracy: 0.9193 - val_loss: 0.3184 - val_accuracy: 0.8916
Epoch 19/25
55000/55000 [==============================] - 4s 67us/sample - loss: 0.2272 - accuracy: 0.9201 - val_loss: 0.3196 - val_accuracy: 0.8886
Epoch 20/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2253 - accuracy: 0.9206 - val_loss: 0.3190 - val_accuracy: 0.8918
Epoch 21/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2235 - accuracy: 0.9214 - val_loss: 0.3176 - val_accuracy: 0.8912
Epoch 22/25
55000/55000 [==============================] - 4s 69us/sample - loss: 0.2220 - accuracy: 0.9220 - val_loss: 0.3181 - val_accuracy: 0.8900
Epoch 23/25
55000/55000 [==============================] - 4s 71us/sample - loss: 0.2206 - accuracy: 0.9226 - val_loss: 0.3187 - val_accuracy: 0.8894
Epoch 24/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2193 - accuracy: 0.9231 - val_loss: 0.3168 - val_accuracy: 0.8908
Epoch 25/25
55000/55000 [==============================] - 4s 68us/sample - loss: 0.2181 - accuracy: 0.9234 - val_loss: 0.3171 - val_accuracy: 0.8898
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (iter2 - self.iteration)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/25
55000/55000 [==============================] - 2s 30us/sample - loss: 0.4926 - accuracy: 0.8268 - val_loss: 0.4229 - val_accuracy: 0.8520
Epoch 2/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.3754 - accuracy: 0.8669 - val_loss: 0.3833 - val_accuracy: 0.8634
Epoch 3/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.3433 - accuracy: 0.8776 - val_loss: 0.3687 - val_accuracy: 0.8666
Epoch 4/25
55000/55000 [==============================] - 2s 28us/sample - loss: 0.3198 - accuracy: 0.8854 - val_loss: 0.3595 - val_accuracy: 0.8738
Epoch 5/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.3011 - accuracy: 0.8920 - val_loss: 0.3421 - val_accuracy: 0.8764
Epoch 6/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.2873 - accuracy: 0.8973 - val_loss: 0.3371 - val_accuracy: 0.8814
Epoch 7/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2738 - accuracy: 0.9026 - val_loss: 0.3312 - val_accuracy: 0.8842
Epoch 8/25
55000/55000 [==============================] - 2s 28us/sample - loss: 0.2633 - accuracy: 0.9071 - val_loss: 0.3338 - val_accuracy: 0.8824
Epoch 9/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.2543 - accuracy: 0.9098 - val_loss: 0.3296 - val_accuracy: 0.8840
Epoch 10/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2465 - accuracy: 0.9125 - val_loss: 0.3233 - val_accuracy: 0.8874
Epoch 11/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.2406 - accuracy: 0.9157 - val_loss: 0.3215 - val_accuracy: 0.8874
Epoch 12/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2366 - accuracy: 0.9173 - val_loss: 0.3237 - val_accuracy: 0.8862
Epoch 13/25
55000/55000 [==============================] - 2s 27us/sample - loss: 0.2370 - accuracy: 0.9160 - val_loss: 0.3282 - val_accuracy: 0.8856
Epoch 14/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2366 - accuracy: 0.9157 - val_loss: 0.3228 - val_accuracy: 0.8874
Epoch 15/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2362 - accuracy: 0.9162 - val_loss: 0.3261 - val_accuracy: 0.8860
Epoch 16/25
55000/55000 [==============================] - 2s 28us/sample - loss: 0.2339 - accuracy: 0.9167 - val_loss: 0.3336 - val_accuracy: 0.8830
Epoch 17/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2319 - accuracy: 0.9166 - val_loss: 0.3316 - val_accuracy: 0.8818
Epoch 18/25
55000/55000 [==============================] - 1s 26us/sample - loss: 0.2295 - accuracy: 0.9181 - val_loss: 0.3424 - val_accuracy: 0.8786
Epoch 19/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2266 - accuracy: 0.9186 - val_loss: 0.3356 - val_accuracy: 0.8844
Epoch 20/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2250 - accuracy: 0.9186 - val_loss: 0.3486 - val_accuracy: 0.8758
Epoch 21/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2221 - accuracy: 0.9189 - val_loss: 0.3443 - val_accuracy: 0.8856
Epoch 22/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2184 - accuracy: 0.9201 - val_loss: 0.3889 - val_accuracy: 0.8700
Epoch 23/25
55000/55000 [==============================] - 1s 27us/sample - loss: 0.2040 - accuracy: 0.9266 - val_loss: 0.3216 - val_accuracy: 0.8910
Epoch 24/25
55000/55000 [==============================] - 2s 28us/sample - loss: 0.1750 - accuracy: 0.9401 - val_loss: 0.3153 - val_accuracy: 0.8932
Epoch 25/25
55000/55000 [==============================] - 2s 28us/sample - loss: 0.1718 - accuracy: 0.9416 - val_loss: 0.3153 - val_accuracy: 0.8940
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor of 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 7s 129us/sample - loss: 1.6597 - accuracy: 0.8128 - val_loss: 0.7630 - val_accuracy: 0.8080
Epoch 2/2
55000/55000 [==============================] - 7s 124us/sample - loss: 0.7176 - accuracy: 0.8271 - val_loss: 0.6848 - val_accuracy: 0.8360
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 145us/sample - loss: 0.5741 - accuracy: 0.8030 - val_loss: 0.3841 - val_accuracy: 0.8572
Epoch 2/2
55000/55000 [==============================] - 7s 134us/sample - loss: 0.4218 - accuracy: 0.8469 - val_loss: 0.3534 - val_accuracy: 0.8728
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
_____no_output_____
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/2
55000/55000 [==============================] - 8s 147us/sample - loss: 0.4745 - accuracy: 0.8329 - val_loss: 0.3988 - val_accuracy: 0.8584
Epoch 2/2
55000/55000 [==============================] - 7s 135us/sample - loss: 0.3554 - accuracy: 0.8688 - val_loss: 0.3681 - val_accuracy: 0.8726
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning 8.1. _Exercise: Build a DNN with five hidden layers of 100 neurons each, He initialization, and the ELU activation function._
###Code
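# A minimal sketch of one possible solution, added for illustration (the names
# below, e.g. `ex8_model`, are assumptions, not the notebook's original content).
# Five hidden layers of 100 neurons each, He initialization, ELU activation;
# the output layer is added in the next exercise, where the task is defined.
ex8_model = keras.models.Sequential()
ex8_model.add(keras.layers.Flatten(input_shape=[28, 28]))
for _ in range(5):
    ex8_model.add(keras.layers.Dense(100, activation="elu",
                                     kernel_initializer="he_normal"))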
###Output
_____no_output_____
###Markdown
8.2. _Exercise: Using Adam optimization and early stopping, try training it on MNIST but only on digits 0 to 4, as we will use transfer learning for digits 5 to 9 in the next exercise. You will need a softmax output layer with five neurons, and as always make sure to save checkpoints at regular intervals and save the final model so you can reuse it later._
###Code
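# Sketch only (assumed names, added for illustration): load MNIST, keep digits
# 0 to 4, add a 5-unit softmax output to the model from 8.1, then train with
# Adam, early stopping and periodic checkpoints.
(mnist_X_train_full, mnist_y_train_full), (mnist_X_test, mnist_y_test) = \
    keras.datasets.mnist.load_data()
mnist_X_train_full = mnist_X_train_full / 255.
mnist_X_test = mnist_X_test / 255.

mask04 = mnist_y_train_full < 5                      # digits 0 to 4 only
X04, y04 = mnist_X_train_full[mask04], mnist_y_train_full[mask04]
X04_valid, X04_train = X04[:5000], X04[5000:]
y04_valid, y04_train = y04[:5000], y04[5000:]

ex8_model.add(keras.layers.Dense(5, activation="softmax"))
ex8_model.compile(loss="sparse_categorical_crossentropy",
                  optimizer=keras.optimizers.Adam(lr=1e-3),
                  metrics=["accuracy"])
callbacks = [
    keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True),
    keras.callbacks.ModelCheckpoint("my_mnist_0_to_4_checkpoint.h5",
                                    save_best_only=True),
]
history = ex8_model.fit(X04_train, y04_train, epochs=100,
                        validation_data=(X04_valid, y04_valid),
                        callbacks=callbacks)
ex8_model.save("my_mnist_0_to_4.h5")                 # final model for exercise 9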
###Output
_____no_output_____
###Markdown
8.3. _Exercise: Tune the hyperparameters using cross-validation and see what precision you can achieve._
###Code
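# A rough sketch of hyperparameter tuning with cross-validation (all names and
# search ranges here are assumptions added for illustration).
from scipy.stats import reciprocal
from sklearn.model_selection import RandomizedSearchCV

def build_ex8_model(n_hidden=5, n_neurons=100, learning_rate=1e-3):
    model = keras.models.Sequential()
    model.add(keras.layers.Flatten(input_shape=[28, 28]))
    for _ in range(n_hidden):
        model.add(keras.layers.Dense(n_neurons, activation="elu",
                                     kernel_initializer="he_normal"))
    model.add(keras.layers.Dense(5, activation="softmax"))
    model.compile(loss="sparse_categorical_crossentropy",
                  optimizer=keras.optimizers.Adam(lr=learning_rate),
                  metrics=["accuracy"])
    return model

keras_clf = keras.wrappers.scikit_learn.KerasClassifier(build_ex8_model)
param_distribs = {
    "n_neurons": [50, 100, 150],
    "learning_rate": reciprocal(3e-4, 3e-2),
}
rnd_search = RandomizedSearchCV(keras_clf, param_distribs, n_iter=10, cv=3, verbose=2)
rnd_search.fit(X04_train, y04_train, epochs=10, validation_split=0.1,
               callbacks=[keras.callbacks.EarlyStopping(patience=3)])
print(rnd_search.best_params_, rnd_search.best_score_)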
###Output
_____no_output_____
###Markdown
8.4. _Exercise: Now try adding Batch Normalization and compare the learning curves: is it converging faster than before? Does it produce a better model?_
###Code
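# Sketch (assumed names, added for illustration): same architecture with a
# BatchNormalization layer after every hidden layer, to compare learning curves.
ex8_bn_model = keras.models.Sequential()
ex8_bn_model.add(keras.layers.Flatten(input_shape=[28, 28]))
for _ in range(5):
    ex8_bn_model.add(keras.layers.Dense(100, activation="elu",
                                        kernel_initializer="he_normal"))
    ex8_bn_model.add(keras.layers.BatchNormalization())
ex8_bn_model.add(keras.layers.Dense(5, activation="softmax"))
ex8_bn_model.compile(loss="sparse_categorical_crossentropy",
                     optimizer=keras.optimizers.Adam(lr=1e-3),
                     metrics=["accuracy"])
history_bn = ex8_bn_model.fit(X04_train, y04_train, epochs=100,
                              validation_data=(X04_valid, y04_valid),
                              callbacks=[keras.callbacks.EarlyStopping(patience=5)])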
###Output
_____no_output_____
###Markdown
8.5. _Exercise: is the model overfitting the training set? Try adding dropout to every layer and try again. Does it help?_
###Code
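# Sketch (assumed names, added for illustration): same model with Dropout after
# each hidden layer, to check whether it reduces overfitting.
ex8_do_model = keras.models.Sequential()
ex8_do_model.add(keras.layers.Flatten(input_shape=[28, 28]))
for _ in range(5):
    ex8_do_model.add(keras.layers.Dense(100, activation="elu",
                                        kernel_initializer="he_normal"))
    ex8_do_model.add(keras.layers.Dropout(rate=0.2))
ex8_do_model.add(keras.layers.Dense(5, activation="softmax"))
ex8_do_model.compile(loss="sparse_categorical_crossentropy",
                     optimizer=keras.optimizers.Adam(lr=1e-3),
                     metrics=["accuracy"])
history_do = ex8_do_model.fit(X04_train, y04_train, epochs=100,
                              validation_data=(X04_valid, y04_valid),
                              callbacks=[keras.callbacks.EarlyStopping(patience=5)])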
###Output
_____no_output_____
###Markdown
9. Transfer learning 9.1. _Exercise: create a new DNN that reuses all the pretrained hidden layers of the previous model, freezes them, and replaces the softmax output layer with a new one._
###Code
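# Sketch (assumed names, added for illustration): reuse the hidden layers of the
# 0-4 model from exercise 8, freeze them, and add a fresh 5-unit softmax output
# for digits 5 to 9.
pretrained = keras.models.load_model("my_mnist_0_to_4.h5")
ex9_model = keras.models.Sequential(pretrained.layers[:-1])   # drop old output layer
for layer in ex9_model.layers:
    layer.trainable = False                                   # freeze reused layers
ex9_model.add(keras.layers.Dense(5, activation="softmax"))
ex9_model.compile(loss="sparse_categorical_crossentropy",
                  optimizer=keras.optimizers.Adam(lr=1e-3),
                  metrics=["accuracy"])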
###Output
_____no_output_____
###Markdown
9.2. _Exercise: train this new DNN on digits 5 to 9, using only 100 images per digit, and time how long it takes. Despite this small number of examples, can you achieve high precision?_
###Code
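# Sketch (assumed names, added for illustration): build a small 5-9 training set
# (100 images per digit, labels remapped to 0-4), train the frozen-transfer model
# and time how long it takes.
import time

mask59 = mnist_y_train_full >= 5
X59, y59 = mnist_X_train_full[mask59], mnist_y_train_full[mask59] - 5
X59_small = np.concatenate([X59[y59 == d][:100] for d in range(5)])
y59_small = np.concatenate([y59[y59 == d][:100] for d in range(5)])
shuffle_idx = np.random.permutation(len(X59_small))            # shuffle class order
X59_small, y59_small = X59_small[shuffle_idx], y59_small[shuffle_idx]

t0 = time.time()
history = ex9_model.fit(X59_small, y59_small, epochs=30, validation_split=0.1)
print("Training took {:.1f} seconds".format(time.time() - t0))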
###Output
_____no_output_____
###Markdown
9.3. _Exercise: try caching the frozen layers, and train the model again: how much faster is it now?_
###Code
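# Sketch (assumed names, added for illustration): the reused layers are frozen, so
# their output for each training image never changes; precompute it once and train
# only the output layer on the cached features.
frozen_part = keras.models.Sequential(ex9_model.layers[:-1])   # the frozen layers
X59_small_cached = frozen_part.predict(X59_small)
top_only = keras.models.Sequential([
    keras.layers.Dense(5, activation="softmax",
                       input_shape=X59_small_cached.shape[1:])
])
top_only.compile(loss="sparse_categorical_crossentropy",
                 optimizer=keras.optimizers.Adam(lr=1e-3),
                 metrics=["accuracy"])
t0 = time.time()
history = top_only.fit(X59_small_cached, y59_small, epochs=30, validation_split=0.1)
print("Training took {:.1f} seconds".format(time.time() - t0))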
###Output
_____no_output_____
###Markdown
9.4. _Exercise: try again reusing just four hidden layers instead of five. Can you achieve a higher precision?_
###Code
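# Sketch (assumed names, added for illustration): same transfer setup but reusing
# only the first four hidden layers of the pretrained 0-4 model.
pretrained4 = keras.models.load_model("my_mnist_0_to_4.h5")
ex9_model4 = keras.models.Sequential(pretrained4.layers[:-2])  # drop 5th hidden + output
for layer in ex9_model4.layers:
    layer.trainable = False
ex9_model4.add(keras.layers.Dense(5, activation="softmax"))
ex9_model4.compile(loss="sparse_categorical_crossentropy",
                   optimizer=keras.optimizers.Adam(lr=1e-3),
                   metrics=["accuracy"])
history = ex9_model4.fit(X59_small, y59_small, epochs=30, validation_split=0.1)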
###Output
_____no_output_____
###Markdown
9.5. _Exercise: now unfreeze the top two hidden layers and continue training: can you get the model to perform even better?_
###Code
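# Sketch (assumed names, added for illustration): unfreeze the top two reused hidden
# layers and keep training with a smaller learning rate, so the pretrained weights
# are only fine-tuned.
for layer in ex9_model.layers[-3:-1]:
    layer.trainable = True
ex9_model.compile(loss="sparse_categorical_crossentropy",
                  optimizer=keras.optimizers.Adam(lr=1e-4),
                  metrics=["accuracy"])
history = ex9_model.fit(X59_small, y59_small, epochs=30, validation_split=0.1)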
###Output
_____no_output_____
###Markdown
10. Pretraining on an auxiliary task In this exercise you will build a DNN that compares two MNIST digit images and predicts whether they represent the same digit or not. Then you will reuse the lower layers of this network to train an MNIST classifier using very little training data. 10.1.Exercise: _Start by building two DNNs (let's call them DNN A and B), both similar to the one you built earlier but without the output layer: each DNN should have five hidden layers of 100 neurons each, He initialization, and ELU activation. Next, add one more hidden layer with 10 units on top of both DNNs. You should use the `keras.layers.concatenate()` function to concatenate the outputs of both DNNs, then feed the result to the hidden layer. Finally, add an output layer with a single neuron using the logistic activation function._
###Code
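# Sketch (assumed names, added for illustration): two towers of five 100-unit ELU
# layers, concatenated, followed by a 10-unit hidden layer and a single logistic
# output, built with the functional API.
def make_tower():
    tower = keras.models.Sequential()
    tower.add(keras.layers.Flatten(input_shape=[28, 28]))
    for _ in range(5):
        tower.add(keras.layers.Dense(100, activation="elu",
                                     kernel_initializer="he_normal"))
    return tower

dnn_a = make_tower()
dnn_b = make_tower()
input_a = keras.layers.Input(shape=[28, 28])
input_b = keras.layers.Input(shape=[28, 28])
merged = keras.layers.concatenate([dnn_a(input_a), dnn_b(input_b)])
hidden = keras.layers.Dense(10, activation="elu",
                            kernel_initializer="he_normal")(merged)
output = keras.layers.Dense(1, activation="sigmoid")(hidden)
pair_model = keras.models.Model(inputs=[input_a, input_b], outputs=[output])
pair_model.compile(loss="binary_crossentropy",
                   optimizer=keras.optimizers.Adam(lr=1e-3),
                   metrics=["accuracy"])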
###Output
_____no_output_____
###Markdown
10.2._Exercise: split the MNIST training set in two sets: split 1 should contain 55,000 images, and split 2 should contain 5,000 images. Create a function that generates a training batch where each instance is a pair of MNIST images picked from split 1. Half of the training instances should be pairs of images that belong to the same class, while the other half should be images from different classes. For each pair, the training label should be 0 if the images are from the same class, or 1 if they are from different classes._
###Code
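# Sketch (assumed names, added for illustration; reuses the MNIST arrays loaded in
# the 8.2 sketch): split the training set 55,000/5,000 and generate balanced batches
# of image pairs (label 0 = same class, 1 = different classes).
split1_X, split2_X = mnist_X_train_full[:55000], mnist_X_train_full[55000:]
split1_y, split2_y = mnist_y_train_full[:55000], mnist_y_train_full[55000:]

def generate_pair_batch(X, y, batch_size=32):
    half = batch_size // 2
    left, right, labels = [], [], []
    while len(labels) < half:                      # same-class pairs (label 0)
        i, j = np.random.randint(0, len(X), size=2)
        if y[i] == y[j]:
            left.append(X[i]); right.append(X[j]); labels.append(0)
    while len(labels) < batch_size:                # different-class pairs (label 1)
        i, j = np.random.randint(0, len(X), size=2)
        if y[i] != y[j]:
            left.append(X[i]); right.append(X[j]); labels.append(1)
    return [np.array(left), np.array(right)], np.array(labels, dtype=np.float32)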
###Output
_____no_output_____
###Markdown
10.3._Exercise: train the DNN on this training set. For each image pair, you can simultaneously feed the first image to DNN A and the second image to DNN B. The whole network will gradually learn to tell whether two images belong to the same class or not._
###Code
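# Sketch (assumed names, added for illustration): train the pair model on batches
# generated from split 1; image A goes to tower A, image B to tower B.
n_iterations = 1000
for iteration in range(n_iterations):
    (X_a, X_b), y_pair = generate_pair_batch(split1_X, split1_y, batch_size=32)
    loss, acc = pair_model.train_on_batch([X_a, X_b], y_pair)
    if iteration % 100 == 0:
        print(iteration, loss, acc)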
###Output
_____no_output_____
###Markdown
10.4._Exercise: now create a new DNN by reusing and freezing the hidden layers of DNN A and adding a softmax output layer on top with 10 neurons. Train this network on split 2 and see if you can achieve high performance despite having only 500 images per class._
###Code
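# Sketch (assumed names, added for illustration): reuse tower A's hidden layers,
# freeze them, and train a 10-way softmax classifier on the small split 2.
for layer in dnn_a.layers:
    layer.trainable = False
mnist_clf = keras.models.Sequential([dnn_a,
                                     keras.layers.Dense(10, activation="softmax")])
mnist_clf.compile(loss="sparse_categorical_crossentropy",
                  optimizer=keras.optimizers.Adam(lr=1e-3),
                  metrics=["accuracy"])
history = mnist_clf.fit(split2_X, split2_y, epochs=20, validation_split=0.1)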
###Output
_____no_output_____
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure Matplotlib plots figures inline, and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated, so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 2s 41us/sample - loss: 1.2810 - accuracy: 0.6205 - val_loss: 0.8869 - val_accuracy: 0.7160
Epoch 2/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.7952 - accuracy: 0.7369 - val_loss: 0.7132 - val_accuracy: 0.7626
Epoch 3/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6817 - accuracy: 0.7726 - val_loss: 0.6385 - val_accuracy: 0.7894
Epoch 4/10
55000/55000 [==============================] - 2s 37us/sample - loss: 0.6219 - accuracy: 0.7942 - val_loss: 0.5931 - val_accuracy: 0.8016
Epoch 5/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5830 - accuracy: 0.8074 - val_loss: 0.5607 - val_accuracy: 0.8170
Epoch 6/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5552 - accuracy: 0.8172 - val_loss: 0.5355 - val_accuracy: 0.8238
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5339 - accuracy: 0.8226 - val_loss: 0.5166 - val_accuracy: 0.8298
Epoch 8/10
55000/55000 [==============================] - 2s 43us/sample - loss: 0.5173 - accuracy: 0.8262 - val_loss: 0.5043 - val_accuracy: 0.8356
Epoch 9/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.5039 - accuracy: 0.8306 - val_loss: 0.4889 - val_accuracy: 0.8384
Epoch 10/10
55000/55000 [==============================] - 2s 38us/sample - loss: 0.4923 - accuracy: 0.8333 - val_loss: 0.4816 - val_accuracy: 0.8394
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 47us/sample - loss: 1.3452 - accuracy: 0.6203 - val_loss: 0.9241 - val_accuracy: 0.7170
Epoch 2/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.8196 - accuracy: 0.7364 - val_loss: 0.7314 - val_accuracy: 0.7600
Epoch 3/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.6970 - accuracy: 0.7701 - val_loss: 0.6517 - val_accuracy: 0.7880
Epoch 4/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.6333 - accuracy: 0.7914 - val_loss: 0.6032 - val_accuracy: 0.8050
Epoch 5/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5916 - accuracy: 0.8049 - val_loss: 0.5689 - val_accuracy: 0.8162
Epoch 6/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5619 - accuracy: 0.8143 - val_loss: 0.5416 - val_accuracy: 0.8222
Epoch 7/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.5391 - accuracy: 0.8208 - val_loss: 0.5213 - val_accuracy: 0.8300
Epoch 8/10
55000/55000 [==============================] - 2s 41us/sample - loss: 0.5214 - accuracy: 0.8258 - val_loss: 0.5075 - val_accuracy: 0.8348
Epoch 9/10
55000/55000 [==============================] - 2s 42us/sample - loss: 0.5070 - accuracy: 0.8287 - val_loss: 0.4917 - val_accuracy: 0.8380
Epoch 10/10
55000/55000 [==============================] - 2s 40us/sample - loss: 0.4946 - accuracy: 0.8322 - val_loss: 0.4839 - val_accuracy: 0.8378
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 13s 238us/sample - loss: 1.1277 - accuracy: 0.5573 - val_loss: 0.8152 - val_accuracy: 0.6700
Epoch 2/5
55000/55000 [==============================] - 11s 198us/sample - loss: 0.6935 - accuracy: 0.7383 - val_loss: 0.5806 - val_accuracy: 0.7928
Epoch 3/5
55000/55000 [==============================] - 11s 196us/sample - loss: 0.5871 - accuracy: 0.7865 - val_loss: 0.6876 - val_accuracy: 0.7462
Epoch 4/5
55000/55000 [==============================] - 11s 199us/sample - loss: 0.5281 - accuracy: 0.8134 - val_loss: 0.5236 - val_accuracy: 0.8230
Epoch 5/5
55000/55000 [==============================] - 11s 201us/sample - loss: 0.4824 - accuracy: 0.8327 - val_loss: 0.5201 - val_accuracy: 0.8312
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/5
55000/55000 [==============================] - 12s 213us/sample - loss: 1.7518 - accuracy: 0.2797 - val_loss: 1.2328 - val_accuracy: 0.4720
Epoch 2/5
55000/55000 [==============================] - 10s 177us/sample - loss: 1.1922 - accuracy: 0.4982 - val_loss: 1.0247 - val_accuracy: 0.5354
Epoch 3/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.9390 - accuracy: 0.6180 - val_loss: 1.0809 - val_accuracy: 0.5118
Epoch 4/5
55000/55000 [==============================] - 10s 178us/sample - loss: 0.7787 - accuracy: 0.6937 - val_loss: 0.7067 - val_accuracy: 0.7344
Epoch 5/5
55000/55000 [==============================] - 10s 180us/sample - loss: 0.7465 - accuracy: 0.7122 - val_loss: 0.9720 - val_accuracy: 0.5702
###Markdown
Not great at all; we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
bn1.updates
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 3s 63us/sample - loss: 0.8760 - accuracy: 0.7122 - val_loss: 0.5509 - val_accuracy: 0.8224
Epoch 2/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5737 - accuracy: 0.8039 - val_loss: 0.4723 - val_accuracy: 0.8460
Epoch 3/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.5143 - accuracy: 0.8231 - val_loss: 0.4376 - val_accuracy: 0.8570
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4826 - accuracy: 0.8333 - val_loss: 0.4135 - val_accuracy: 0.8638
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4571 - accuracy: 0.8415 - val_loss: 0.3990 - val_accuracy: 0.8654
Epoch 6/10
55000/55000 [==============================] - 3s 53us/sample - loss: 0.4432 - accuracy: 0.8456 - val_loss: 0.3870 - val_accuracy: 0.8710
Epoch 7/10
55000/55000 [==============================] - 3s 56us/sample - loss: 0.4255 - accuracy: 0.8515 - val_loss: 0.3782 - val_accuracy: 0.8698
Epoch 8/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4150 - accuracy: 0.8536 - val_loss: 0.3708 - val_accuracy: 0.8758
Epoch 9/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4016 - accuracy: 0.8596 - val_loss: 0.3634 - val_accuracy: 0.8750
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3915 - accuracy: 0.8629 - val_loss: 0.3601 - val_accuracy: 0.8758
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has an offset parameter of its own; keeping both would be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 4s 64us/sample - loss: 0.8656 - accuracy: 0.7094 - val_loss: 0.5650 - val_accuracy: 0.8098
Epoch 2/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5766 - accuracy: 0.8018 - val_loss: 0.4834 - val_accuracy: 0.8358
Epoch 3/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.5184 - accuracy: 0.8216 - val_loss: 0.4461 - val_accuracy: 0.8470
Epoch 4/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4852 - accuracy: 0.8314 - val_loss: 0.4226 - val_accuracy: 0.8558
Epoch 5/10
55000/55000 [==============================] - 3s 54us/sample - loss: 0.4579 - accuracy: 0.8399 - val_loss: 0.4086 - val_accuracy: 0.8604
Epoch 6/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4406 - accuracy: 0.8457 - val_loss: 0.3974 - val_accuracy: 0.8640
Epoch 7/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4263 - accuracy: 0.8498 - val_loss: 0.3883 - val_accuracy: 0.8676
Epoch 8/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4152 - accuracy: 0.8530 - val_loss: 0.3803 - val_accuracy: 0.8682
Epoch 9/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.4032 - accuracy: 0.8564 - val_loss: 0.3738 - val_accuracy: 0.8718
Epoch 10/10
55000/55000 [==============================] - 3s 55us/sample - loss: 0.3937 - accuracy: 0.8623 - val_loss: 0.3690 - val_accuracy: 0.8732
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
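# A minimal sketch (an addition, not part of the original notebook): a clipped optimizer is
# used like any other one, simply by passing it to `compile` (here reusing the `model`
# defined above):
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
# history = model.fit(X_train, y_train, epochs=1, validation_data=(X_valid, y_valid))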
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Train on 200 samples, validate on 986 samples
Epoch 1/4
200/200 [==============================] - 0s 2ms/sample - loss: 0.5619 - accuracy: 0.6650 - val_loss: 0.5669 - val_accuracy: 0.6531
Epoch 2/4
200/200 [==============================] - 0s 208us/sample - loss: 0.5249 - accuracy: 0.7200 - val_loss: 0.5337 - val_accuracy: 0.6957
Epoch 3/4
200/200 [==============================] - 0s 200us/sample - loss: 0.4923 - accuracy: 0.7400 - val_loss: 0.5039 - val_accuracy: 0.7211
Epoch 4/4
200/200 [==============================] - 0s 214us/sample - loss: 0.4630 - accuracy: 0.7550 - val_loss: 0.4773 - val_accuracy: 0.7383
Train on 200 samples, validate on 986 samples
Epoch 1/16
200/200 [==============================] - 0s 2ms/sample - loss: 0.3864 - accuracy: 0.8200 - val_loss: 0.3357 - val_accuracy: 0.8661
Epoch 2/16
200/200 [==============================] - 0s 207us/sample - loss: 0.2701 - accuracy: 0.9350 - val_loss: 0.2608 - val_accuracy: 0.9249
Epoch 3/16
200/200 [==============================] - 0s 226us/sample - loss: 0.2082 - accuracy: 0.9650 - val_loss: 0.2150 - val_accuracy: 0.9503
Epoch 4/16
200/200 [==============================] - 0s 212us/sample - loss: 0.1695 - accuracy: 0.9800 - val_loss: 0.1840 - val_accuracy: 0.9625
Epoch 5/16
200/200 [==============================] - 0s 226us/sample - loss: 0.1428 - accuracy: 0.9800 - val_loss: 0.1602 - val_accuracy: 0.9706
Epoch 6/16
200/200 [==============================] - 0s 236us/sample - loss: 0.1221 - accuracy: 0.9850 - val_loss: 0.1424 - val_accuracy: 0.9797
Epoch 7/16
200/200 [==============================] - 0s 218us/sample - loss: 0.1067 - accuracy: 0.9950 - val_loss: 0.1293 - val_accuracy: 0.9828
Epoch 8/16
200/200 [==============================] - 0s 229us/sample - loss: 0.0952 - accuracy: 0.9950 - val_loss: 0.1186 - val_accuracy: 0.9848
Epoch 9/16
200/200 [==============================] - 0s 224us/sample - loss: 0.0858 - accuracy: 0.9950 - val_loss: 0.1099 - val_accuracy: 0.9848
Epoch 10/16
200/200 [==============================] - 0s 241us/sample - loss: 0.0781 - accuracy: 1.0000 - val_loss: 0.1026 - val_accuracy: 0.9878
Epoch 11/16
200/200 [==============================] - 0s 234us/sample - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0964 - val_accuracy: 0.9888
Epoch 12/16
200/200 [==============================] - 0s 222us/sample - loss: 0.0664 - accuracy: 1.0000 - val_loss: 0.0906 - val_accuracy: 0.9888
Epoch 13/16
200/200 [==============================] - 0s 228us/sample - loss: 0.0614 - accuracy: 1.0000 - val_loss: 0.0862 - val_accuracy: 0.9899
Epoch 14/16
200/200 [==============================] - 0s 225us/sample - loss: 0.0575 - accuracy: 1.0000 - val_loss: 0.0818 - val_accuracy: 0.9899
Epoch 15/16
200/200 [==============================] - 0s 219us/sample - loss: 0.0537 - accuracy: 1.0000 - val_loss: 0.0782 - val_accuracy: 0.9899
Epoch 16/16
200/200 [==============================] - 0s 221us/sample - loss: 0.0505 - accuracy: 1.0000 - val_loss: 0.0752 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
2000/2000 [==============================] - 0s 25us/sample - loss: 0.0697 - accuracy: 0.9925
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4!
###Code
(100 - 96.95) / (100 - 99.25)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
**Chapter 11 – Training Deep Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 11._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
%load_ext tensorboard
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Vanishing/Exploding Gradients Problem
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
z = np.linspace(-5, 5, 200)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [1, 1], 'k--')
plt.plot([0, 0], [-0.2, 1.2], 'k-')
plt.plot([-5, 5], [-3/4, 7/4], 'g--')
plt.plot(z, logit(z), "b-", linewidth=2)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props, fontsize=14, ha="center")
plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props, fontsize=14, ha="center")
plt.grid(True)
plt.title("Sigmoid activation function", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("sigmoid_saturation_plot")
plt.show()
###Output
Saving figure sigmoid_saturation_plot
###Markdown
Xavier and He Initialization
###Code
[name for name in dir(keras.initializers) if not name.startswith("_")]
keras.layers.Dense(10, activation="relu", kernel_initializer="he_normal")
init = keras.initializers.VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
keras.layers.Dense(10, activation="relu", kernel_initializer=init)
###Output
_____no_output_____
###Markdown
Nonsaturating Activation Functions Leaky ReLU
###Code
def leaky_relu(z, alpha=0.01):
return np.maximum(alpha*z, z)
plt.plot(z, leaky_relu(z, 0.05), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([0, 0], [-0.5, 4.2], 'k-')
plt.grid(True)
props = dict(facecolor='black', shrink=0.1)
plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.2), arrowprops=props, fontsize=14, ha="center")
plt.title("Leaky ReLU activation function", fontsize=14)
plt.axis([-5, 5, -0.5, 4.2])
save_fig("leaky_relu_plot")
plt.show()
[m for m in dir(keras.activations) if not m.startswith("_")]
[m for m in dir(keras.layers) if "relu" in m.lower()]
###Output
_____no_output_____
###Markdown
Let's train a neural network on Fashion MNIST using the Leaky ReLU:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.0
X_test = X_test / 255.0
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.LeakyReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6314 - accuracy: 0.5054 - val_loss: 0.8886 - val_accuracy: 0.7160
Epoch 2/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.8416 - accuracy: 0.7247 - val_loss: 0.7130 - val_accuracy: 0.7656
Epoch 3/10
1719/1719 [==============================] - 2s 879us/step - loss: 0.7053 - accuracy: 0.7637 - val_loss: 0.6427 - val_accuracy: 0.7898
Epoch 4/10
1719/1719 [==============================] - 2s 883us/step - loss: 0.6325 - accuracy: 0.7908 - val_loss: 0.5900 - val_accuracy: 0.8066
Epoch 5/10
1719/1719 [==============================] - 2s 887us/step - loss: 0.5992 - accuracy: 0.8021 - val_loss: 0.5582 - val_accuracy: 0.8200
Epoch 6/10
1719/1719 [==============================] - 2s 881us/step - loss: 0.5624 - accuracy: 0.8142 - val_loss: 0.5350 - val_accuracy: 0.8238
Epoch 7/10
1719/1719 [==============================] - 2s 892us/step - loss: 0.5379 - accuracy: 0.8217 - val_loss: 0.5157 - val_accuracy: 0.8304
Epoch 8/10
1719/1719 [==============================] - 2s 895us/step - loss: 0.5152 - accuracy: 0.8295 - val_loss: 0.5078 - val_accuracy: 0.8284
Epoch 9/10
1719/1719 [==============================] - 2s 911us/step - loss: 0.5100 - accuracy: 0.8268 - val_loss: 0.4895 - val_accuracy: 0.8390
Epoch 10/10
1719/1719 [==============================] - 2s 897us/step - loss: 0.4918 - accuracy: 0.8340 - val_loss: 0.4817 - val_accuracy: 0.8396
###Markdown
Now let's try PReLU:
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(100, kernel_initializer="he_normal"),
keras.layers.PReLU(),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 2s 1ms/step - loss: 1.6969 - accuracy: 0.4974 - val_loss: 0.9255 - val_accuracy: 0.7186
Epoch 2/10
1719/1719 [==============================] - 2s 990us/step - loss: 0.8706 - accuracy: 0.7247 - val_loss: 0.7305 - val_accuracy: 0.7630
Epoch 3/10
1719/1719 [==============================] - 2s 980us/step - loss: 0.7211 - accuracy: 0.7621 - val_loss: 0.6564 - val_accuracy: 0.7882
Epoch 4/10
1719/1719 [==============================] - 2s 985us/step - loss: 0.6447 - accuracy: 0.7879 - val_loss: 0.6003 - val_accuracy: 0.8048
Epoch 5/10
1719/1719 [==============================] - 2s 967us/step - loss: 0.6077 - accuracy: 0.8004 - val_loss: 0.5656 - val_accuracy: 0.8182
Epoch 6/10
1719/1719 [==============================] - 2s 984us/step - loss: 0.5692 - accuracy: 0.8118 - val_loss: 0.5406 - val_accuracy: 0.8236
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5428 - accuracy: 0.8194 - val_loss: 0.5196 - val_accuracy: 0.8314
Epoch 8/10
1719/1719 [==============================] - 2s 983us/step - loss: 0.5193 - accuracy: 0.8284 - val_loss: 0.5113 - val_accuracy: 0.8316
Epoch 9/10
1719/1719 [==============================] - 2s 992us/step - loss: 0.5128 - accuracy: 0.8272 - val_loss: 0.4916 - val_accuracy: 0.8378
Epoch 10/10
1719/1719 [==============================] - 2s 988us/step - loss: 0.4941 - accuracy: 0.8314 - val_loss: 0.4826 - val_accuracy: 0.8398
###Markdown
ELU
###Code
def elu(z, alpha=1):
return np.where(z < 0, alpha * (np.exp(z) - 1), z)
plt.plot(z, elu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1, -1], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title(r"ELU activation function ($\alpha=1$)", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("elu_plot")
plt.show()
###Output
Saving figure elu_plot
###Markdown
Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:
###Code
keras.layers.Dense(10, activation="elu")
###Output
_____no_output_____
###Markdown
SELU This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017. During training, a neural network composed exclusively of a stack of dense layers using the SELU activation function and LeCun initialization will self-normalize: the output of each layer will tend to preserve the same mean and variance during training, which solves the vanishing/exploding gradients problem. As a result, this activation function outperforms the other activation functions very significantly for such neural nets, so you should really try it out. Unfortunately, the self-normalizing property of the SELU activation function is easily broken: you cannot use ℓ1 or ℓ2 regularization, regular dropout, max-norm, skip connections or other non-sequential topologies (so recurrent neural networks won't self-normalize). However, in practice it works quite well with sequential CNNs. If you break self-normalization, SELU will not necessarily outperform other activation functions.
###Code
from scipy.special import erfc
# alpha and scale to self normalize with mean 0 and standard deviation 1
# (see equation 14 in the paper):
alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1/np.sqrt(2)) * np.exp(1/2) - 1)
scale_0_1 = (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e)) * np.sqrt(2 * np.pi) * (2 * erfc(np.sqrt(2))*np.e**2 + np.pi*erfc(1/np.sqrt(2))**2*np.e - 2*(2+np.pi)*erfc(1/np.sqrt(2))*np.sqrt(np.e)+np.pi+2)**(-1/2)
def selu(z, scale=scale_0_1, alpha=alpha_0_1):
return scale * elu(z, alpha)
plt.plot(z, selu(z), "b-", linewidth=2)
plt.plot([-5, 5], [0, 0], 'k-')
plt.plot([-5, 5], [-1.758, -1.758], 'k--')
plt.plot([0, 0], [-2.2, 3.2], 'k-')
plt.grid(True)
plt.title("SELU activation function", fontsize=14)
plt.axis([-5, 5, -2.2, 3.2])
save_fig("selu_plot")
plt.show()
###Output
Saving figure selu_plot
###Markdown
By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too). Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:
###Code
np.random.seed(42)
Z = np.random.normal(size=(500, 100)) # standardized inputs
for layer in range(1000):
W = np.random.normal(size=(100, 100), scale=np.sqrt(1 / 100)) # LeCun initialization
Z = selu(np.dot(Z, W))
means = np.mean(Z, axis=0).mean()
stds = np.std(Z, axis=0).mean()
if layer % 100 == 0:
print("Layer {}: mean {:.2f}, std deviation {:.2f}".format(layer, means, stds))
###Output
Layer 0: mean -0.00, std deviation 1.00
Layer 100: mean 0.02, std deviation 0.96
Layer 200: mean 0.01, std deviation 0.90
Layer 300: mean -0.02, std deviation 0.92
Layer 400: mean 0.05, std deviation 0.89
Layer 500: mean 0.01, std deviation 0.93
Layer 600: mean 0.02, std deviation 0.92
Layer 700: mean -0.02, std deviation 0.90
Layer 800: mean 0.05, std deviation 0.83
Layer 900: mean 0.02, std deviation 1.00
###Markdown
Using SELU is easy:
###Code
keras.layers.Dense(10, activation="selu",
kernel_initializer="lecun_normal")
###Output
_____no_output_____
###Markdown
Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="selu",
kernel_initializer="lecun_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="selu",
kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:
###Code
pixel_means = X_train.mean(axis=0, keepdims=True)
pixel_stds = X_train.std(axis=0, keepdims=True)
X_train_scaled = (X_train - pixel_means) / pixel_stds
X_valid_scaled = (X_valid - pixel_means) / pixel_stds
X_test_scaled = (X_test - pixel_means) / pixel_stds
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 12s 6ms/step - loss: 1.3556 - accuracy: 0.4808 - val_loss: 0.7711 - val_accuracy: 0.6858
Epoch 2/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7537 - accuracy: 0.7235 - val_loss: 0.7534 - val_accuracy: 0.7384
Epoch 3/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.7451 - accuracy: 0.7357 - val_loss: 0.5943 - val_accuracy: 0.7834
Epoch 4/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5699 - accuracy: 0.7906 - val_loss: 0.5434 - val_accuracy: 0.8066
Epoch 5/5
1719/1719 [==============================] - 9s 5ms/step - loss: 0.5569 - accuracy: 0.8051 - val_loss: 0.4907 - val_accuracy: 0.8218
###Markdown
Now look at what happens if we try to use the ReLU activation function instead:
###Code
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"))
for layer in range(99):
model.add(keras.layers.Dense(100, activation="relu", kernel_initializer="he_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/5
1719/1719 [==============================] - 11s 5ms/step - loss: 2.0460 - accuracy: 0.1919 - val_loss: 1.5971 - val_accuracy: 0.3048
Epoch 2/5
1719/1719 [==============================] - 8s 5ms/step - loss: 1.2654 - accuracy: 0.4591 - val_loss: 0.9156 - val_accuracy: 0.6372
Epoch 3/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.9312 - accuracy: 0.6169 - val_loss: 0.8928 - val_accuracy: 0.6246
Epoch 4/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.8188 - accuracy: 0.6710 - val_loss: 0.6914 - val_accuracy: 0.7396
Epoch 5/5
1719/1719 [==============================] - 8s 5ms/step - loss: 0.7288 - accuracy: 0.7152 - val_loss: 0.6638 - val_accuracy: 0.7380
###Markdown
Not great at all: we suffered from the vanishing/exploding gradients problem. Batch Normalization
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(100, activation="relu"),
keras.layers.BatchNormalization(),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
bn1 = model.layers[1]
[(var.name, var.trainable) for var in bn1.variables]
#bn1.updates #deprecated
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.2287 - accuracy: 0.5993 - val_loss: 0.5526 - val_accuracy: 0.8230
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5996 - accuracy: 0.7959 - val_loss: 0.4725 - val_accuracy: 0.8468
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5312 - accuracy: 0.8168 - val_loss: 0.4375 - val_accuracy: 0.8558
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4884 - accuracy: 0.8294 - val_loss: 0.4153 - val_accuracy: 0.8596
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4717 - accuracy: 0.8343 - val_loss: 0.3997 - val_accuracy: 0.8640
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4420 - accuracy: 0.8461 - val_loss: 0.3867 - val_accuracy: 0.8694
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4285 - accuracy: 0.8496 - val_loss: 0.3763 - val_accuracy: 0.8710
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4086 - accuracy: 0.8552 - val_loss: 0.3711 - val_accuracy: 0.8740
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4079 - accuracy: 0.8566 - val_loss: 0.3631 - val_accuracy: 0.8752
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3903 - accuracy: 0.8617 - val_loss: 0.3573 - val_accuracy: 0.8750
###Markdown
Sometimes applying BN before the activation function works better (there's a debate on this topic). Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer already has one offset parameter per input; adding bias terms would just be a waste of parameters, so you can set `use_bias=False` when creating those layers:
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.BatchNormalization(),
keras.layers.Dense(300, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(100, use_bias=False),
keras.layers.BatchNormalization(),
keras.layers.Activation("relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
validation_data=(X_valid, y_valid))
###Output
Epoch 1/10
1719/1719 [==============================] - 3s 1ms/step - loss: 1.3677 - accuracy: 0.5604 - val_loss: 0.6767 - val_accuracy: 0.7812
Epoch 2/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.7136 - accuracy: 0.7702 - val_loss: 0.5566 - val_accuracy: 0.8184
Epoch 3/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.6123 - accuracy: 0.7990 - val_loss: 0.5007 - val_accuracy: 0.8360
Epoch 4/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5547 - accuracy: 0.8148 - val_loss: 0.4666 - val_accuracy: 0.8448
Epoch 5/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5255 - accuracy: 0.8230 - val_loss: 0.4434 - val_accuracy: 0.8534
Epoch 6/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4947 - accuracy: 0.8328 - val_loss: 0.4263 - val_accuracy: 0.8550
Epoch 7/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4736 - accuracy: 0.8385 - val_loss: 0.4130 - val_accuracy: 0.8566
Epoch 8/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4550 - accuracy: 0.8446 - val_loss: 0.4035 - val_accuracy: 0.8612
Epoch 9/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4495 - accuracy: 0.8440 - val_loss: 0.3943 - val_accuracy: 0.8638
Epoch 10/10
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4333 - accuracy: 0.8494 - val_loss: 0.3875 - val_accuracy: 0.8660
###Markdown
Gradient Clipping All Keras optimizers accept `clipnorm` or `clipvalue` arguments:
###Code
optimizer = keras.optimizers.SGD(clipvalue=1.0)
optimizer = keras.optimizers.SGD(clipnorm=1.0)
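# A minimal sketch (an addition, not part of the original notebook): `clipvalue` clips each
# gradient component to [-1.0, 1.0] independently (which can change the gradient's direction),
# while `clipnorm` rescales the whole gradient vector whenever its L2 norm exceeds the
# threshold, preserving its direction. Either way, the clipped optimizer is simply passed to
# `compile`, here reusing the `model` built above:
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
# history = model.fit(X_train, y_train, epochs=1, validation_data=(X_valid, y_valid))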
###Output
_____no_output_____
###Markdown
Reusing Pretrained Layers Reusing a Keras model Let's split the fashion MNIST training set in two:* `X_train_A`: all images of all items except for sandals and shirts (classes 5 and 6).* `X_train_B`: a much smaller training set of just the first 200 images of sandals or shirts.The validation set and the test set are also split this way, but without restricting the number of images.We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter).
###Code
def split_dataset(X, y):
y_5_or_6 = (y == 5) | (y == 6) # sandals or shirts
y_A = y[~y_5_or_6]
y_A[y_A > 6] -= 2 # class indices 7, 8, 9 should be moved to 5, 6, 7
y_B = (y[y_5_or_6] == 6).astype(np.float32) # binary classification task: is it a shirt (class 6)?
return ((X[~y_5_or_6], y_A),
(X[y_5_or_6], y_B))
(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)
(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)
(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)
X_train_B = X_train_B[:200]
y_train_B = y_train_B[:200]
X_train_A.shape
X_train_B.shape
y_train_A[:30]
y_train_B[:30]
tf.random.set_seed(42)
np.random.seed(42)
model_A = keras.models.Sequential()
model_A.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_A.add(keras.layers.Dense(n_hidden, activation="selu"))
model_A.add(keras.layers.Dense(8, activation="softmax"))
model_A.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_A.fit(X_train_A, y_train_A, epochs=20,
validation_data=(X_valid_A, y_valid_A))
model_A.save("my_model_A.h5")
model_B = keras.models.Sequential()
model_B.add(keras.layers.Flatten(input_shape=[28, 28]))
for n_hidden in (300, 100, 50, 50, 50):
model_B.add(keras.layers.Dense(n_hidden, activation="selu"))
model_B.add(keras.layers.Dense(1, activation="sigmoid"))
model_B.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B.fit(X_train_B, y_train_B, epochs=20,
validation_data=(X_valid_B, y_valid_B))
model_B.summary()
model_A = keras.models.load_model("my_model_A.h5")
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))
model_A_clone = keras.models.clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
layer.trainable = True
model_B_on_A.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
###Output
Epoch 1/4
7/7 [==============================] - 1s 83ms/step - loss: 0.6155 - accuracy: 0.6184 - val_loss: 0.5843 - val_accuracy: 0.6329
Epoch 2/4
7/7 [==============================] - 0s 9ms/step - loss: 0.5550 - accuracy: 0.6638 - val_loss: 0.5467 - val_accuracy: 0.6805
Epoch 3/4
7/7 [==============================] - 0s 8ms/step - loss: 0.4897 - accuracy: 0.7482 - val_loss: 0.5146 - val_accuracy: 0.7089
Epoch 4/4
7/7 [==============================] - 0s 8ms/step - loss: 0.4899 - accuracy: 0.7405 - val_loss: 0.4859 - val_accuracy: 0.7323
Epoch 1/16
7/7 [==============================] - 0s 28ms/step - loss: 0.4380 - accuracy: 0.7774 - val_loss: 0.3460 - val_accuracy: 0.8661
Epoch 2/16
7/7 [==============================] - 0s 9ms/step - loss: 0.2971 - accuracy: 0.9143 - val_loss: 0.2603 - val_accuracy: 0.9310
Epoch 3/16
7/7 [==============================] - 0s 9ms/step - loss: 0.2034 - accuracy: 0.9777 - val_loss: 0.2110 - val_accuracy: 0.9554
Epoch 4/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1754 - accuracy: 0.9719 - val_loss: 0.1790 - val_accuracy: 0.9696
Epoch 5/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1348 - accuracy: 0.9809 - val_loss: 0.1561 - val_accuracy: 0.9757
Epoch 6/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1172 - accuracy: 0.9973 - val_loss: 0.1392 - val_accuracy: 0.9797
Epoch 7/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1137 - accuracy: 0.9931 - val_loss: 0.1266 - val_accuracy: 0.9838
Epoch 8/16
7/7 [==============================] - 0s 9ms/step - loss: 0.1000 - accuracy: 0.9931 - val_loss: 0.1163 - val_accuracy: 0.9858
Epoch 9/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0834 - accuracy: 1.0000 - val_loss: 0.1065 - val_accuracy: 0.9888
Epoch 10/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0775 - accuracy: 1.0000 - val_loss: 0.0999 - val_accuracy: 0.9899
Epoch 11/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0689 - accuracy: 1.0000 - val_loss: 0.0939 - val_accuracy: 0.9899
Epoch 12/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0719 - accuracy: 1.0000 - val_loss: 0.0888 - val_accuracy: 0.9899
Epoch 13/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0565 - accuracy: 1.0000 - val_loss: 0.0839 - val_accuracy: 0.9899
Epoch 14/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0494 - accuracy: 1.0000 - val_loss: 0.0802 - val_accuracy: 0.9899
Epoch 15/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0544 - accuracy: 1.0000 - val_loss: 0.0768 - val_accuracy: 0.9899
Epoch 16/16
7/7 [==============================] - 0s 9ms/step - loss: 0.0472 - accuracy: 1.0000 - val_loss: 0.0738 - val_accuracy: 0.9899
###Markdown
So, what's the final verdict?
###Code
model_B.evaluate(X_test_B, y_test_B)
model_B_on_A.evaluate(X_test_B, y_test_B)
###Output
63/63 [==============================] - 0s 705us/step - loss: 0.0682 - accuracy: 0.9935
###Markdown
Great! We got quite a bit of transfer: the error rate dropped by a factor of 4.5!
###Code
(100 - 97.05) / (100 - 99.35)
###Output
_____no_output_____
###Markdown
Faster Optimizers Momentum optimization
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
Nesterov Accelerated Gradient
###Code
optimizer = keras.optimizers.SGD(lr=0.001, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
AdaGrad
###Code
optimizer = keras.optimizers.Adagrad(lr=0.001)
###Output
_____no_output_____
###Markdown
RMSProp
###Code
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9)
###Output
_____no_output_____
###Markdown
Adam Optimization
###Code
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Adamax Optimization
###Code
optimizer = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
###Output
_____no_output_____
###Markdown
Nadam Optimization
###Code
optimizer = keras.optimizers.Nadam(lr=0.001, beta_1=0.9, beta_2=0.999)
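# A minimal sketch (an addition, not in the original notebook): every optimizer above is a
# drop-in replacement for plain SGD, so using one is just a matter of passing it to `compile`.
# The model below follows the Fashion MNIST pattern used throughout this notebook; the name
# `sketch_model` is purely illustrative.
sketch_model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="relu", kernel_initializer="he_normal"),
    keras.layers.Dense(10, activation="softmax")
])
sketch_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
                     metrics=["accuracy"])
# sketch_model.fit(X_train_scaled, y_train, epochs=1, validation_data=(X_valid_scaled, y_valid))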
###Output
_____no_output_____
###Markdown
Learning Rate Scheduling Power Scheduling ```lr = lr0 / (1 + steps / s)**c```* Keras uses `c=1` and `s = 1 / decay`
###Code
optimizer = keras.optimizers.SGD(lr=0.01, decay=1e-4)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
learning_rate = 0.01
decay = 1e-4
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
epochs = np.arange(n_epochs)
lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)
plt.plot(epochs, lrs, "o-")
plt.axis([0, n_epochs - 1, 0, 0.01])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Power Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exponential Scheduling ```lr = lr0 * 0.1**(epoch / s)```
###Code
def exponential_decay_fn(epoch):
return 0.01 * 0.1**(epoch / 20)
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1**(epoch / s)
return exponential_decay_fn
exponential_decay_fn = exponential_decay(lr0=0.01, s=20)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
The schedule function can take the current learning rate as a second argument:
###Code
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
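# A minimal sketch (an addition, not part of the original notebook): the two-argument form is
# attached in exactly the same way as before, through the LearningRateScheduler callback.
# Since it multiplies the current learning rate by a constant factor at every epoch, the decay
# now starts from whatever initial learning rate the optimizer was given.
lr_scheduler = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
# history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
#                     validation_data=(X_valid_scaled, y_valid), callbacks=[lr_scheduler])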
###Output
_____no_output_____
###Markdown
If you want to update the learning rate at each iteration rather than at each epoch, you must write your own callback class:
###Code
K = keras.backend
class ExponentialDecay(keras.callbacks.Callback):
def __init__(self, s=40000):
super().__init__()
self.s = s
def on_batch_begin(self, batch, logs=None):
# Note: the `batch` argument is reset at each epoch
lr = K.get_value(self.model.optimizer.lr)
K.set_value(self.model.optimizer.lr, lr * 0.1**(1 / self.s))  # use the callback's own decay constant
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
lr0 = 0.01
optimizer = keras.optimizers.Nadam(lr=lr0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
exp_decay = ExponentialDecay(s)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[exp_decay])
n_steps = n_epochs * len(X_train) // 32
steps = np.arange(n_steps)
lrs = lr0 * 0.1**(steps / s)
plt.plot(steps, lrs, "-", linewidth=2)
plt.axis([0, n_steps - 1, 0, lr0 * 1.1])
plt.xlabel("Batch")
plt.ylabel("Learning Rate")
plt.title("Exponential Scheduling (per batch)", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Piecewise Constant Scheduling
###Code
def piecewise_constant_fn(epoch):
if epoch < 5:
return 0.01
elif epoch < 15:
return 0.005
else:
return 0.001
def piecewise_constant(boundaries, values):
boundaries = np.array([0] + boundaries)
values = np.array(values)
def piecewise_constant_fn(epoch):
return values[np.argmax(boundaries > epoch) - 1]
return piecewise_constant_fn
piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])
lr_scheduler = keras.callbacks.LearningRateScheduler(piecewise_constant_fn)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, [piecewise_constant_fn(epoch) for epoch in history.epoch], "o-")
plt.axis([0, n_epochs - 1, 0, 0.011])
plt.xlabel("Epoch")
plt.ylabel("Learning Rate")
plt.title("Piecewise Constant Scheduling", fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Performance Scheduling
###Code
tf.random.set_seed(42)
np.random.seed(42)
lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
plt.plot(history.epoch, history.history["lr"], "bo-")
plt.xlabel("Epoch")
plt.ylabel("Learning Rate", color='b')
plt.tick_params('y', colors='b')
plt.gca().set_xlim(0, n_epochs - 1)
plt.grid(True)
ax2 = plt.gca().twinx()
ax2.plot(history.epoch, history.history["val_loss"], "r^-")
ax2.set_ylabel('Validation Loss', color='r')
ax2.tick_params('y', colors='r')
plt.title("Reduce LR on Plateau", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
tf.keras schedulers
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
s = 20 * len(X_train) // 32 # number of steps in 20 epochs (batch size = 32)
learning_rate = keras.optimizers.schedules.ExponentialDecay(0.01, s, 0.1)
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 25
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.5995 - accuracy: 0.7923 - val_loss: 0.4095 - val_accuracy: 0.8606
Epoch 2/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3890 - accuracy: 0.8613 - val_loss: 0.3738 - val_accuracy: 0.8692
Epoch 3/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3530 - accuracy: 0.8772 - val_loss: 0.3735 - val_accuracy: 0.8692
Epoch 4/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3296 - accuracy: 0.8813 - val_loss: 0.3494 - val_accuracy: 0.8798
Epoch 5/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3178 - accuracy: 0.8867 - val_loss: 0.3430 - val_accuracy: 0.8794
Epoch 6/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2930 - accuracy: 0.8951 - val_loss: 0.3414 - val_accuracy: 0.8826
Epoch 7/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2854 - accuracy: 0.8985 - val_loss: 0.3354 - val_accuracy: 0.8810
Epoch 8/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9039 - val_loss: 0.3364 - val_accuracy: 0.8824
Epoch 9/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2714 - accuracy: 0.9047 - val_loss: 0.3265 - val_accuracy: 0.8846
Epoch 10/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2570 - accuracy: 0.9084 - val_loss: 0.3238 - val_accuracy: 0.8854
Epoch 11/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2502 - accuracy: 0.9117 - val_loss: 0.3250 - val_accuracy: 0.8862
Epoch 12/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2453 - accuracy: 0.9145 - val_loss: 0.3299 - val_accuracy: 0.8830
Epoch 13/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2408 - accuracy: 0.9154 - val_loss: 0.3219 - val_accuracy: 0.8870
Epoch 14/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2380 - accuracy: 0.9154 - val_loss: 0.3221 - val_accuracy: 0.8860
Epoch 15/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2378 - accuracy: 0.9166 - val_loss: 0.3208 - val_accuracy: 0.8864
Epoch 16/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2318 - accuracy: 0.9191 - val_loss: 0.3184 - val_accuracy: 0.8892
Epoch 17/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2266 - accuracy: 0.9212 - val_loss: 0.3197 - val_accuracy: 0.8906
Epoch 18/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2284 - accuracy: 0.9185 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 19/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2286 - accuracy: 0.9205 - val_loss: 0.3197 - val_accuracy: 0.8884
Epoch 20/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2288 - accuracy: 0.9211 - val_loss: 0.3169 - val_accuracy: 0.8906
Epoch 21/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2265 - accuracy: 0.9212 - val_loss: 0.3179 - val_accuracy: 0.8904
Epoch 22/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2258 - accuracy: 0.9205 - val_loss: 0.3163 - val_accuracy: 0.8914
Epoch 23/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9226 - val_loss: 0.3170 - val_accuracy: 0.8904
Epoch 24/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2182 - accuracy: 0.9244 - val_loss: 0.3165 - val_accuracy: 0.8898
Epoch 25/25
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2224 - accuracy: 0.9229 - val_loss: 0.3164 - val_accuracy: 0.8904
###Markdown
For piecewise constant scheduling, try this:
###Code
learning_rate = keras.optimizers.schedules.PiecewiseConstantDecay(
boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],
values=[0.01, 0.005, 0.001])
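# A minimal sketch (an addition, not part of the original notebook): exactly like the
# ExponentialDecay schedule used above, this schedule object is passed straight to an
# optimizer, which queries it with the current step count at every training iteration:
optimizer = keras.optimizers.SGD(learning_rate)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])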
###Output
_____no_output_____
###Markdown
1Cycle scheduling
###Code
K = keras.backend
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (self.iteration - iter1)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
K.set_value(self.model.optimizer.lr, rate)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/25
430/430 [==============================] - 1s 2ms/step - loss: 0.6572 - accuracy: 0.7740 - val_loss: 0.4872 - val_accuracy: 0.8338
Epoch 2/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4580 - accuracy: 0.8397 - val_loss: 0.4274 - val_accuracy: 0.8520
Epoch 3/25
430/430 [==============================] - 1s 2ms/step - loss: 0.4121 - accuracy: 0.8545 - val_loss: 0.4116 - val_accuracy: 0.8588
Epoch 4/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3837 - accuracy: 0.8642 - val_loss: 0.3868 - val_accuracy: 0.8688
Epoch 5/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3639 - accuracy: 0.8719 - val_loss: 0.3766 - val_accuracy: 0.8688
Epoch 6/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3456 - accuracy: 0.8775 - val_loss: 0.3739 - val_accuracy: 0.8706
Epoch 7/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3330 - accuracy: 0.8811 - val_loss: 0.3635 - val_accuracy: 0.8708
Epoch 8/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3184 - accuracy: 0.8861 - val_loss: 0.3959 - val_accuracy: 0.8610
Epoch 9/25
430/430 [==============================] - 1s 2ms/step - loss: 0.3065 - accuracy: 0.8890 - val_loss: 0.3475 - val_accuracy: 0.8770
Epoch 10/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2943 - accuracy: 0.8927 - val_loss: 0.3392 - val_accuracy: 0.8806
Epoch 11/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2838 - accuracy: 0.8963 - val_loss: 0.3467 - val_accuracy: 0.8800
Epoch 12/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2707 - accuracy: 0.9024 - val_loss: 0.3646 - val_accuracy: 0.8696
Epoch 13/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2536 - accuracy: 0.9079 - val_loss: 0.3350 - val_accuracy: 0.8842
Epoch 14/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2405 - accuracy: 0.9135 - val_loss: 0.3465 - val_accuracy: 0.8794
Epoch 15/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2279 - accuracy: 0.9185 - val_loss: 0.3257 - val_accuracy: 0.8830
Epoch 16/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2159 - accuracy: 0.9232 - val_loss: 0.3294 - val_accuracy: 0.8824
Epoch 17/25
430/430 [==============================] - 1s 2ms/step - loss: 0.2062 - accuracy: 0.9263 - val_loss: 0.3333 - val_accuracy: 0.8882
Epoch 18/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1978 - accuracy: 0.9301 - val_loss: 0.3235 - val_accuracy: 0.8898
Epoch 19/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1892 - accuracy: 0.9337 - val_loss: 0.3233 - val_accuracy: 0.8906
Epoch 20/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1821 - accuracy: 0.9365 - val_loss: 0.3224 - val_accuracy: 0.8928
Epoch 21/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1752 - accuracy: 0.9400 - val_loss: 0.3220 - val_accuracy: 0.8908
Epoch 22/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1700 - accuracy: 0.9416 - val_loss: 0.3180 - val_accuracy: 0.8962
Epoch 23/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1655 - accuracy: 0.9438 - val_loss: 0.3187 - val_accuracy: 0.8940
Epoch 24/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1627 - accuracy: 0.9454 - val_loss: 0.3177 - val_accuracy: 0.8932
Epoch 25/25
430/430 [==============================] - 1s 2ms/step - loss: 0.1610 - accuracy: 0.9462 - val_loss: 0.3170 - val_accuracy: 0.8934
###Markdown
Avoiding Overfitting Through Regularization $\ell_1$ and $\ell_2$ regularization
###Code
layer = keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
# or l1(0.1) for ℓ1 regularization with a factor or 0.1
# or l1_l2(0.1, 0.01) for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(100, activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01)),
keras.layers.Dense(10, activation="softmax",
kernel_regularizer=keras.regularizers.l2(0.01))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
from functools import partial
RegularizedDense = partial(keras.layers.Dense,
activation="elu",
kernel_initializer="he_normal",
kernel_regularizer=keras.regularizers.l2(0.01))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 3.2911 - accuracy: 0.7924 - val_loss: 0.7218 - val_accuracy: 0.8310
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.7282 - accuracy: 0.8245 - val_loss: 0.6826 - val_accuracy: 0.8382
###Markdown
Dropout
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
keras.layers.Dropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 6s 3ms/step - loss: 0.7611 - accuracy: 0.7576 - val_loss: 0.3730 - val_accuracy: 0.8644
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4306 - accuracy: 0.8401 - val_loss: 0.3395 - val_accuracy: 0.8722
###Markdown
Alpha Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(300, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal"),
keras.layers.AlphaDropout(rate=0.2),
keras.layers.Dense(10, activation="softmax")
])
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
n_epochs = 20
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.evaluate(X_train_scaled, y_train)
history = model.fit(X_train_scaled, y_train)
###Output
1719/1719 [==============================] - 2s 1ms/step - loss: 0.4225 - accuracy: 0.8432
###Markdown
MC Dropout
###Code
tf.random.set_seed(42)
np.random.seed(42)
# With training=True, dropout stays active at inference time, so each of the 100
# forward passes below gives a slightly different prediction (Monte Carlo samples).
y_probas = np.stack([model(X_test_scaled, training=True)
                     for sample in range(100)])
# Average the samples for the MC estimate; the spread gives an uncertainty measure.
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
np.round(model.predict(X_test_scaled[:1]), 2)
np.round(y_probas[:, :1], 2)
np.round(y_proba[:1], 2)
y_std = y_probas.std(axis=0)
np.round(y_std[:1], 2)
y_pred = np.argmax(y_proba, axis=1)
accuracy = np.sum(y_pred == y_test) / len(y_test)
accuracy
class MCDropout(keras.layers.Dropout):
def call(self, inputs):
return super().call(inputs, training=True)
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
tf.random.set_seed(42)
np.random.seed(42)
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
mc_model.summary()
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
mc_model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
mc_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Now we can use the model with MC Dropout:
###Code
np.round(np.mean([mc_model.predict(X_test_scaled[:1]) for sample in range(100)], axis=0), 2)
###Output
_____no_output_____
###Markdown
Max norm
###Code
layer = keras.layers.Dense(100, activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
MaxNormDense = partial(keras.layers.Dense,
activation="selu", kernel_initializer="lecun_normal",
kernel_constraint=keras.constraints.max_norm(1.))
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
MaxNormDense(300),
MaxNormDense(100),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam", metrics=["accuracy"])
n_epochs = 2
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid))
###Output
Epoch 1/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.5763 - accuracy: 0.8020 - val_loss: 0.3674 - val_accuracy: 0.8674
Epoch 2/2
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3545 - accuracy: 0.8709 - val_loss: 0.3714 - val_accuracy: 0.8662
###Markdown
Exercises 1. to 7. See appendix A. 8. Deep Learning on CIFAR10 a.*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the ELU activation function.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
activation="elu",
kernel_initializer="he_normal"))
###Output
_____no_output_____
###Markdown
b.*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.* Let's add the output layer to the model:
###Code
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better.
###Code
optimizer = keras.optimizers.Nadam(lr=5e-5)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. Let's use the first 5,000 images of the original training set as the validation set:
###Code
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train = X_train_full[5000:]
y_train = y_train_full[5000:]
X_valid = X_train_full[:5000]
y_valid = y_train_full[:5000]
###Output
_____no_output_____
###Markdown
Now we can create the callbacks we need and train the model:
###Code
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
%tensorboard --logdir=./my_cifar10_logs --port=6006
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_model.h5")
model.evaluate(X_valid, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4960 - accuracy: 0.4762
###Markdown
The model with the lowest validation loss gets about 47.6% accuracy on the validation set. It took 27 epochs to reach the lowest validation loss, with roughly 8 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization. c.*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?* The code below is very similar to the code above, with a few changes:* I added a BN layer after every Dense layer (before the activation function), except for the output layer. I also added a BN layer before the first hidden layer.* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.* I renamed the run directories to run_bn_* and the model file name to my_cifar10_bn_model.h5.
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Activation("elu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_bn_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_bn_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_bn_model.h5")
model.evaluate(X_valid, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 19s 9ms/step - loss: 1.9765 - accuracy: 0.2968 - val_loss: 1.6602 - val_accuracy: 0.4042
Epoch 2/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6787 - accuracy: 0.4056 - val_loss: 1.5887 - val_accuracy: 0.4304
Epoch 3/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.6097 - accuracy: 0.4274 - val_loss: 1.5781 - val_accuracy: 0.4326
Epoch 4/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5574 - accuracy: 0.4486 - val_loss: 1.5064 - val_accuracy: 0.4676
Epoch 5/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.5075 - accuracy: 0.4642 - val_loss: 1.4412 - val_accuracy: 0.4844
Epoch 6/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4664 - accuracy: 0.4787 - val_loss: 1.4179 - val_accuracy: 0.4984
Epoch 7/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.4334 - accuracy: 0.4932 - val_loss: 1.4277 - val_accuracy: 0.4906
Epoch 8/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.4054 - accuracy: 0.5038 - val_loss: 1.3843 - val_accuracy: 0.5130
Epoch 9/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3816 - accuracy: 0.5106 - val_loss: 1.3691 - val_accuracy: 0.5108
Epoch 10/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3547 - accuracy: 0.5206 - val_loss: 1.3552 - val_accuracy: 0.5226
Epoch 11/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.3244 - accuracy: 0.5371 - val_loss: 1.3678 - val_accuracy: 0.5142
Epoch 12/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.3078 - accuracy: 0.5393 - val_loss: 1.3844 - val_accuracy: 0.5080
Epoch 13/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2889 - accuracy: 0.5431 - val_loss: 1.3566 - val_accuracy: 0.5164
Epoch 14/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.2607 - accuracy: 0.5559 - val_loss: 1.3626 - val_accuracy: 0.5248
Epoch 15/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2580 - accuracy: 0.5587 - val_loss: 1.3616 - val_accuracy: 0.5276
Epoch 16/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2441 - accuracy: 0.5586 - val_loss: 1.3350 - val_accuracy: 0.5286
Epoch 17/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.2241 - accuracy: 0.5676 - val_loss: 1.3370 - val_accuracy: 0.5408
Epoch 18/100
<<29 more lines>>
Epoch 33/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0336 - accuracy: 0.6369 - val_loss: 1.3682 - val_accuracy: 0.5450
Epoch 34/100
1407/1407 [==============================] - 11s 8ms/step - loss: 1.0228 - accuracy: 0.6388 - val_loss: 1.3348 - val_accuracy: 0.5458
Epoch 35/100
1407/1407 [==============================] - 12s 8ms/step - loss: 1.0205 - accuracy: 0.6407 - val_loss: 1.3490 - val_accuracy: 0.5440
Epoch 36/100
1407/1407 [==============================] - 12s 9ms/step - loss: 1.0008 - accuracy: 0.6489 - val_loss: 1.3568 - val_accuracy: 0.5408
Epoch 37/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9785 - accuracy: 0.6543 - val_loss: 1.3628 - val_accuracy: 0.5396
Epoch 38/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9832 - accuracy: 0.6592 - val_loss: 1.3617 - val_accuracy: 0.5482
Epoch 39/100
1407/1407 [==============================] - 12s 8ms/step - loss: 0.9707 - accuracy: 0.6581 - val_loss: 1.3767 - val_accuracy: 0.5446
Epoch 40/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9590 - accuracy: 0.6651 - val_loss: 1.4200 - val_accuracy: 0.5314
Epoch 41/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9548 - accuracy: 0.6668 - val_loss: 1.3692 - val_accuracy: 0.5450
Epoch 42/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9480 - accuracy: 0.6667 - val_loss: 1.3841 - val_accuracy: 0.5310
Epoch 43/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9411 - accuracy: 0.6716 - val_loss: 1.4036 - val_accuracy: 0.5382
Epoch 44/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9383 - accuracy: 0.6708 - val_loss: 1.4114 - val_accuracy: 0.5236
Epoch 45/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9258 - accuracy: 0.6769 - val_loss: 1.4224 - val_accuracy: 0.5324
Epoch 46/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.9072 - accuracy: 0.6836 - val_loss: 1.3875 - val_accuracy: 0.5442
Epoch 47/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8996 - accuracy: 0.6850 - val_loss: 1.4449 - val_accuracy: 0.5280
Epoch 48/100
1407/1407 [==============================] - 13s 9ms/step - loss: 0.9050 - accuracy: 0.6835 - val_loss: 1.4167 - val_accuracy: 0.5338
Epoch 49/100
1407/1407 [==============================] - 12s 9ms/step - loss: 0.8934 - accuracy: 0.6880 - val_loss: 1.4260 - val_accuracy: 0.5294
157/157 [==============================] - 1s 2ms/step - loss: 1.3344 - accuracy: 0.5398
###Markdown
* *Is the model converging faster than before?* Much faster! The previous model took 27 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 5 epochs and continued to make progress until the 16th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.* *Does BN produce a better model?* Yes! The final model is also much better, with 54.0% accuracy instead of 47.6%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 12s instead of 8s, because of the extra computations required by the BN layers. But overall the training time (wall time) was shortened significantly! d.*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustements to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=7e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_selu_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_selu_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
model = keras.models.load_model("my_cifar10_selu_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
157/157 [==============================] - 0s 1ms/step - loss: 1.4633 - accuracy: 0.4792
###Markdown
We get 47.9% accuracy, which is not much better than the original model (47.6%), and not as good as the model using batch normalization (54.0%). However, convergence was almost as fast as with the BN model, plus each epoch took only 7 seconds. So it's by far the fastest model to train so far. e.*Exercise: Try regularizing the model with alpha dropout. Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.Nadam(lr=5e-4)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
model_checkpoint_cb = keras.callbacks.ModelCheckpoint("my_cifar10_alpha_dropout_model.h5", save_best_only=True)
run_index = 1 # increment every time you train the model
run_logdir = os.path.join(os.curdir, "my_cifar10_logs", "run_alpha_dropout_{:03d}".format(run_index))
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]
X_means = X_train.mean(axis=0)
X_stds = X_train.std(axis=0)
X_train_scaled = (X_train - X_means) / X_stds
X_valid_scaled = (X_valid - X_means) / X_stds
X_test_scaled = (X_test - X_means) / X_stds
model.fit(X_train_scaled, y_train, epochs=100,
validation_data=(X_valid_scaled, y_valid),
callbacks=callbacks)
model = keras.models.load_model("my_cifar10_alpha_dropout_model.h5")
model.evaluate(X_valid_scaled, y_valid)
###Output
Epoch 1/100
1407/1407 [==============================] - 9s 5ms/step - loss: 2.0583 - accuracy: 0.2742 - val_loss: 1.7429 - val_accuracy: 0.3858
Epoch 2/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.6852 - accuracy: 0.4008 - val_loss: 1.7055 - val_accuracy: 0.3792
Epoch 3/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5963 - accuracy: 0.4413 - val_loss: 1.7401 - val_accuracy: 0.4072
Epoch 4/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.5231 - accuracy: 0.4634 - val_loss: 1.5728 - val_accuracy: 0.4584
Epoch 5/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.4619 - accuracy: 0.4887 - val_loss: 1.5448 - val_accuracy: 0.4702
Epoch 6/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.4074 - accuracy: 0.5061 - val_loss: 1.5678 - val_accuracy: 0.4664
Epoch 7/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.3718 - accuracy: 0.5222 - val_loss: 1.5764 - val_accuracy: 0.4824
Epoch 8/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.3220 - accuracy: 0.5387 - val_loss: 1.4805 - val_accuracy: 0.4890
Epoch 9/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2908 - accuracy: 0.5487 - val_loss: 1.5521 - val_accuracy: 0.4638
Epoch 10/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.2537 - accuracy: 0.5607 - val_loss: 1.5281 - val_accuracy: 0.4924
Epoch 11/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.2215 - accuracy: 0.5782 - val_loss: 1.5147 - val_accuracy: 0.5046
Epoch 12/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.1910 - accuracy: 0.5831 - val_loss: 1.5248 - val_accuracy: 0.5002
Epoch 13/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1659 - accuracy: 0.5982 - val_loss: 1.5620 - val_accuracy: 0.5066
Epoch 14/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1282 - accuracy: 0.6120 - val_loss: 1.5440 - val_accuracy: 0.5180
Epoch 15/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.1127 - accuracy: 0.6133 - val_loss: 1.5782 - val_accuracy: 0.5146
Epoch 16/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0917 - accuracy: 0.6266 - val_loss: 1.6182 - val_accuracy: 0.5182
Epoch 17/100
1407/1407 [==============================] - 6s 5ms/step - loss: 1.0620 - accuracy: 0.6331 - val_loss: 1.6285 - val_accuracy: 0.5126
Epoch 18/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0433 - accuracy: 0.6413 - val_loss: 1.6299 - val_accuracy: 0.5158
Epoch 19/100
1407/1407 [==============================] - 7s 5ms/step - loss: 1.0087 - accuracy: 0.6549 - val_loss: 1.7172 - val_accuracy: 0.5062
Epoch 20/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9950 - accuracy: 0.6571 - val_loss: 1.6524 - val_accuracy: 0.5098
Epoch 21/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9848 - accuracy: 0.6652 - val_loss: 1.7686 - val_accuracy: 0.5038
Epoch 22/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9597 - accuracy: 0.6744 - val_loss: 1.6177 - val_accuracy: 0.5084
Epoch 23/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9399 - accuracy: 0.6790 - val_loss: 1.7095 - val_accuracy: 0.5082
Epoch 24/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.9148 - accuracy: 0.6884 - val_loss: 1.7160 - val_accuracy: 0.5150
Epoch 25/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.9023 - accuracy: 0.6949 - val_loss: 1.7017 - val_accuracy: 0.5152
Epoch 26/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8732 - accuracy: 0.7031 - val_loss: 1.7274 - val_accuracy: 0.5088
Epoch 27/100
1407/1407 [==============================] - 6s 5ms/step - loss: 0.8542 - accuracy: 0.7091 - val_loss: 1.7648 - val_accuracy: 0.5166
Epoch 28/100
1407/1407 [==============================] - 7s 5ms/step - loss: 0.8499 - accuracy: 0.7118 - val_loss: 1.7973 - val_accuracy: 0.5000
157/157 [==============================] - 0s 1ms/step - loss: 1.4805 - accuracy: 0.4890
###Markdown
The model reaches 48.9% accuracy on the validation set. That's very slightly better than without dropout (47.6%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case. Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:
###Code
class MCAlphaDropout(keras.layers.AlphaDropout):
def call(self, inputs):
return super().call(inputs, training=True)
###Output
_____no_output_____
###Markdown
Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:
###Code
mc_model = keras.models.Sequential([
MCAlphaDropout(layer.rate) if isinstance(layer, keras.layers.AlphaDropout) else layer
for layer in model.layers
])
###Output
_____no_output_____
###Markdown
Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. The second will use these mean probabilities to predict the most likely class for each instance:
###Code
def mc_dropout_predict_probas(mc_model, X, n_samples=10):
Y_probas = [mc_model.predict(X) for sample in range(n_samples)]
return np.mean(Y_probas, axis=0)
def mc_dropout_predict_classes(mc_model, X, n_samples=10):
Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)
return np.argmax(Y_probas, axis=1)
###Output
_____no_output_____
###Markdown
Now let's make predictions for all the instances in the validation set, and compute the accuracy:
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)
accuracy = np.mean(y_pred == y_valid[:, 0])
accuracy
###Output
_____no_output_____
###Markdown
We get no accuracy improvement in this case (we're still at 48.9% accuracy). So the best model we got in this exercise is the Batch Normalization model. f.*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*
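The `find_learning_rate`, `plot_lr_vs_loss` and `OneCycleScheduler` helpers used in the next cells are the ones defined earlier in this notebook. Purely as an illustrative sketch of the 1cycle idea (this is not the author's implementation, and the class name here is made up), a minimal callback could ramp the learning rate linearly up to a peak over the first half of training and back down over the second half:

```python
# Minimal sketch of a 1cycle-style schedule (illustrative only; the notebook's
# own OneCycleScheduler is more complete).
class SimpleOneCycle(keras.callbacks.Callback):
    def __init__(self, total_steps, max_rate, start_rate=None):
        self.total_steps = total_steps          # total number of training batches
        self.max_rate = max_rate                # peak learning rate, reached mid-training
        self.start_rate = start_rate or max_rate / 10
        self.step = 0

    def on_batch_begin(self, batch, logs=None):
        half = self.total_steps / 2
        if self.step < half:   # linear warm-up towards max_rate
            rate = self.start_rate + (self.max_rate - self.start_rate) * self.step / half
        else:                  # linear cool-down back towards start_rate
            rate = self.max_rate - (self.max_rate - self.start_rate) * (self.step - half) / half
        keras.backend.set_value(self.model.optimizer.lr, rate)
        self.step += 1
```

Such a callback would be attached via `callbacks=[SimpleOneCycle(total_steps, max_rate=0.05)]`; the cells below use the notebook's own scheduler instead.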
###Code
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-3)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
batch_size = 128
rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 1.4])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
for _ in range(20):
model.add(keras.layers.Dense(100,
kernel_initializer="lecun_normal",
activation="selu"))
model.add(keras.layers.AlphaDropout(rate=0.1))
model.add(keras.layers.Dense(10, activation="softmax"))
optimizer = keras.optimizers.SGD(lr=1e-2)
model.compile(loss="sparse_categorical_crossentropy",
optimizer=optimizer,
metrics=["accuracy"])
n_epochs = 15
onecycle = OneCycleScheduler(len(X_train_scaled) // batch_size * n_epochs, max_rate=0.05)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_valid_scaled, y_valid),
callbacks=[onecycle])
###Output
Epoch 1/15
352/352 [==============================] - 3s 6ms/step - loss: 2.2298 - accuracy: 0.2349 - val_loss: 1.7841 - val_accuracy: 0.3834
Epoch 2/15
352/352 [==============================] - 2s 6ms/step - loss: 1.7928 - accuracy: 0.3689 - val_loss: 1.6806 - val_accuracy: 0.4086
Epoch 3/15
352/352 [==============================] - 2s 6ms/step - loss: 1.6475 - accuracy: 0.4190 - val_loss: 1.6378 - val_accuracy: 0.4350
Epoch 4/15
352/352 [==============================] - 2s 6ms/step - loss: 1.5428 - accuracy: 0.4543 - val_loss: 1.6266 - val_accuracy: 0.4390
Epoch 5/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4865 - accuracy: 0.4769 - val_loss: 1.6158 - val_accuracy: 0.4384
Epoch 6/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4339 - accuracy: 0.4866 - val_loss: 1.5850 - val_accuracy: 0.4412
Epoch 7/15
352/352 [==============================] - 2s 6ms/step - loss: 1.4042 - accuracy: 0.5056 - val_loss: 1.6146 - val_accuracy: 0.4384
Epoch 8/15
352/352 [==============================] - 2s 6ms/step - loss: 1.3437 - accuracy: 0.5229 - val_loss: 1.5299 - val_accuracy: 0.4846
Epoch 9/15
352/352 [==============================] - 2s 5ms/step - loss: 1.2721 - accuracy: 0.5459 - val_loss: 1.5145 - val_accuracy: 0.4874
Epoch 10/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1942 - accuracy: 0.5698 - val_loss: 1.4958 - val_accuracy: 0.5040
Epoch 11/15
352/352 [==============================] - 2s 6ms/step - loss: 1.1211 - accuracy: 0.6033 - val_loss: 1.5406 - val_accuracy: 0.4984
Epoch 12/15
352/352 [==============================] - 2s 6ms/step - loss: 1.0673 - accuracy: 0.6161 - val_loss: 1.5284 - val_accuracy: 0.5144
Epoch 13/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9927 - accuracy: 0.6435 - val_loss: 1.5449 - val_accuracy: 0.5140
Epoch 14/15
352/352 [==============================] - 2s 6ms/step - loss: 0.9205 - accuracy: 0.6703 - val_loss: 1.5652 - val_accuracy: 0.5224
Epoch 15/15
352/352 [==============================] - 2s 6ms/step - loss: 0.8936 - accuracy: 0.6801 - val_loss: 1.5912 - val_accuracy: 0.5198
|
Boston_Housing_Prices.ipynb | ###Markdown
Regression in Sklearn:We have to drop the Price column (this is our target y), and all other features will be our training set (X)
###Code
from sklearn.linear_model import LinearRegression
X = boston_df.drop('Price',axis = 1)
#Linear regression object
lm = LinearRegression()
lm.fit(X, boston_df.Price)
print('Estimated coefficients', lm.coef_)
print('Estimated intercept', lm.intercept_)
coef_table = pd.DataFrame(list(zip(X.columns, lm.coef_)), columns = ['features', 'Estimated_coefficients'])
coef_table
plt.scatter(boston_df.RM, boston_df.Price)
#plt.xlim(0, 20)
#plt.ylim(0,80)
plt.xlabel('Average number of rooms per dwelling')
plt.ylabel('Housing Price')
plt.title('Relationship between RM and Housing Price')
plt.show()
lm.predict(X)[0:5]
boston_df['Predicted_Price'] = lm.predict(X)
boston_df['Error'] = boston_df.Price - boston_df.Predicted_Price
boston_df.head()
np.mean((boston_df.Error)**2)
import statsmodels.api as sm
# Note: without sm.add_constant(X) the OLS model is fit without an intercept term.
result = sm.OLS(boston_df.Price, X).fit()
result.summary()
X.columns.values
plt.scatter(boston_df.Price,boston_df.Predicted_Price)
plt.xlabel('Real Prices $Y_{i}$')
plt.ylabel('Predicted Prices $\hat{Y}_{i}$')
plt.title('Predicted Prices vs Real Prices')
S2 = np.mean((boston_df.Price - boston_df.Predicted_Price)**2)
print('Mean squared error (MSE): ', S2)
X.shape
# Setting training and testing set manually
X_train = X[:-50]
X_test = X[-50:]
y_train = boston_df.Price[:-50]
y_test = boston_df.Price[-50:]
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
# Setting training and testing set by sklearn
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,
                        boston_df.Price, test_size = 0.3, random_state = 5)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
sk_lm = LinearRegression()
sk_lm.fit(X_train, y_train)
pred_train = sk_lm.predict(X_train)
pred_test = sk_lm.predict(X_test)
pred_train[0:5]
pred_test[0:5]
boston_df.Price[0:5]
MSE_training_set = np.mean((y_train - sk_lm.predict(X_train))**2)
MSE_testing_set = np.mean((y_test - sk_lm.predict(X_test))**2)
print('MSE_training_set', MSE_training_set)
print('MSE_testing_set', MSE_testing_set)
###Output
MSE_training_set 19.0715279659
MSE_testing_set 30.7032322072
###Markdown
Residual plotsResidual plots are a good way to visualize the errors in your data. If you have done a good job, then your data should be randomly scattered around the zero line. If you see structure in your data, that means your model is not capturing something. Maybe there is an interaction between 2 variables that you are not considering, or maybe you are measuring time-dependent data. If you get some structure in your data, you should go back to your model and check whether you are doing a good job with your parameters.
###Code
fig = plt.figure(figsize=(10, 7))
plt.scatter(sk_lm.predict(X_train), y_train - sk_lm.predict(X_train), c = 'b', s = 40, alpha = 0.5)
plt.scatter(sk_lm.predict(X_test), y_test - sk_lm.predict(X_test), c = 'g', s = 40)
plt.hlines(y = 0, xmin = 0, xmax = 50)
plt.title('Residuals plot using training (blue) and testing (green) data')
plt.ylabel('Residuals')
np.mean((y_train - sk_lm.predict(X_train))**2)
#LOGISTIC REGRESSION and Classification Accuracy
feature_select = ['CRIM', 'ZN', 'INDUS']
X_select = boston_df[feature_select]
#X_select[X_select['CHAS']>0]
y_select = boston_df['CHAS']#HOUSE CLOSE TO A RIVER
X_train_select, X_test_select, y_train_select, y_test_select = train_test_split(X_select,
                                                            y_select, test_size = 0.2, random_state = 1)
print(X_train_select.shape, X_test_select.shape, y_train_select.shape, y_test_select.shape)
from sklearn.linear_model import LogisticRegression
logistic_select = LogisticRegression()
logistic_select.fit(X_train_select, y_train_select)
Chas_predict = logistic_select.predict(X_test_select)
from sklearn import metrics
print('Accuracy score', metrics.accuracy_score(y_test_select, Chas_predict))
y_test_select.value_counts()
np.mean(y_test_select)#percentage of houses close to a river
1-np.mean(y_test_select) # the model predicts that ~97% of the houses are far from a river (null accuracy)
# The null accuracy:
max(np.mean(y_test_select), 1-np.mean(y_test_select))
# Comparison of the testing set and the predicted values (y_train rarely used)
print('True values', y_test_select.values[0:10])
print('Pred values', Chas_predict[0:10])
###Output
True values [ 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
Pred values [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
###Markdown
Importing Libraries
###Code
import pandas as pd;
from sklearn.datasets import load_boston;
import seaborn as sns;
from sklearn.preprocessing import StandardScaler;
import matplotlib.pyplot as plt;
from sklearn.model_selection import train_test_split;
from sklearn.linear_model import LinearRegression;
from sklearn.linear_model import Lasso;
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score;
from sklearn.preprocessing import PolynomialFeatures
###Output
_____no_output_____
###Markdown
Reading the Boston Housing Prices using the sklearn function
###Code
boston_dataset = load_boston()
df = pd.DataFrame(boston_dataset.data, columns=boston_dataset.feature_names)
df.head()
###Output
_____no_output_____
###Markdown
Since the boston dataset is pre-prepared, in that it is already divided into features and targets, the MEDV column is not loaded into the DataFrame directly. Hence, we can manually add the MEDV column by calling the target
###Code
df['MEDV'] = boston_dataset.target
df
df.columns
df.describe()
###Output
_____no_output_____
###Markdown
Using the describe function we can see a lot of things here. Most importantly, there appear to be a couple of outliers in the data. For instance, if you compare the max with the mean of each column, you will see a big difference. Univariate Analysis
###Code
sns.distplot(df['AGE'])
plt.title("Age Distribution")
###Output
/Users/adarshraghav/opt/anaconda3/lib/python3.8/site-packages/seaborn/distributions.py:2551: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
###Markdown
As we can see in the distribution above, most owners are above the age of 70. Although this doesn't give us much insight, we can try and plot something against it. As you can see below, I'm plotting the median house value against age to find a relation
###Code
sns.relplot(data = df, x="AGE", y="MEDV")
###Output
_____no_output_____
###Markdown
The general trend we can see here is that Median Value of the house starts to decrease as Age increases
###Code
sns.relplot(data = df, x="AGE", y="TAX")
sns.relplot(data = df, x="MEDV", y="TAX")
###Output
_____no_output_____
###Markdown
As we can see here, when the median house value is low, the tax is higher. This is very odd, as it should not be possible unless some other factors are at play. We may have to do multivariate analysis to find correlations with other variables (a quick check is sketched below).
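To start digging into this, a quick check (a sketch, not part of the original notebook) is to look at the pairwise correlations of TAX and MEDV with a few other columns before the full multivariate analysis further down:

```python
# Pairwise correlations between a handful of columns (illustrative check only)
df[['MEDV', 'TAX', 'AGE', 'RM', 'LSTAT']].corr()
```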
###Code
sns.relplot(x='MEDV', y='TAX', data=df, hue='AGE', size='AGE')
###Output
_____no_output_____
###Markdown
Another thing to be noted here is that most of the highest tax payers are people aged above 80 with some of the lowest Median Values.
###Code
sns.displot(df, x="NOX", col="CHAS")
###Output
_____no_output_____
###Markdown
Since CHAS is a binary variable switching between 0 and 1, I tried to use that too. Here we can see that when CHAS=0 the NOX value is higher as compared to when CHAS=1 (a quick numerical check is sketched below).
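To back up the visual impression with numbers, one quick check (a sketch, not part of the original notebook) is to summarise NOX within each CHAS group:

```python
# CHAS is the Charles River dummy (1 if the tract bounds the river, else 0)
df.groupby('CHAS')['NOX'].describe()
```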
###Code
fig, axs = plt.subplots(ncols=7, nrows=2, figsize=(20, 10))
index = 0
axs = axs.flatten()
for k,v in df.items():
sns.boxplot(y=k, data=df, ax=axs[index])
index += 1
plt.tight_layout(pad=0.4, w_pad=0.5, h_pad=5.0)
###Output
_____no_output_____
###Markdown
As we can see, my hypothesis was correct that many attributes here have outliers. Multi-variate Analysis
###Code
plt.figure(figsize=(20, 10))
sns.heatmap(df.corr().abs(), annot=True)
###Output
_____no_output_____
###Markdown
Here the RAD and TAX attributes are highly correlated, which means we have to take out one of them, as keeping both will not help and will only give rise to inaccuracy
###Code
df=df.drop(['RAD'], axis = 1)
df
###Output
_____no_output_____
###Markdown
Now we will scale the feature values using the StandardScaler
###Code
df2 = pd.DataFrame(boston_dataset.data, columns=boston_dataset.feature_names)
scaler = StandardScaler().fit(df2)
df2 = scaler.transform(df2)
df2
###Output
_____no_output_____
###Markdown
Linear Regression
###Code
X = df2
y = boston_dataset.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state=42)
reg=LinearRegression()
reg.fit(X_train, y_train)
y_train_pred = reg.predict(X_train)
y_test_pred = reg.predict(X_test)
# Note: mean_squared_error returns the MSE; pass squared=False if the RMSE is actually wanted.
rmse_train = mean_squared_error(y_train, y_train_pred)
rmse_test = mean_squared_error(y_test, y_test_pred)
print("RMSE after prediction on training data is: {}".format(rmse_train))
print("RMSE after prediction on testing data is: {}".format(rmse_test))
r2_train = r2_score(y_train, y_train_pred)
r2_test = r2_score(y_test, y_test_pred)
print("R2 Score after prediction on training data is: {}, or in percentage {}%".format(r2_train, r2_train*100))
print("R2 Score after prediction on testing data is: {}, or in percentage {}%".format(r2_test, r2_test*100))
###Output
R2 Score after prediction on training data is: 0.7434997532004697, or in percentage 74.34997532004697%
R2 Score after prediction on testing data is: 0.7112260057484924, or in percentage 71.12260057484924%
###Markdown
Polynomial Regression
###Code
from sklearn.preprocessing import PolynomialFeatures
def poly_reg(X,y,X2, deg):
poly_reg = PolynomialFeatures(degree=deg)
X_poly = poly_reg.fit_transform(X)
pol_reg = LinearRegression()
pol_reg.fit(X_poly, y)
return pol_reg.predict(poly_reg.fit_transform(X2))
deg = 1
Y_Pred_poly_test = poly_reg(X_train, y_train, X_test, deg)
Y_Pred_poly_train = poly_reg(X_train, y_train, X_train, deg)
rmse_poly_train = mean_squared_error(y_train, Y_Pred_poly_train)
rmse_poly_test = mean_squared_error(y_test, Y_Pred_poly_test)
r2_poly_train = r2_score(y_train, Y_Pred_poly_train)
r2_poly_test = r2_score(y_test, Y_Pred_poly_test)
print("--------------- For Polynomial with Degree = {} ---------------".format(deg))
print("RMSE after prediction on testing data is: {}".format(rmse_poly_test))
print("RMSE after prediction on training data is: {}".format(rmse_poly_train))
print("R2Score after prediction on testing data is: {}".format(r2_poly_test))
print("R2Score after prediction on training data is: {}".format(r2_poly_train))
deg = 3
Y_Pred_poly_test = poly_reg(X_train, y_train, X_test, deg)
Y_Pred_poly_train = poly_reg(X_train, y_train, X_train, deg)
rmse_poly_train = mean_squared_error(y_train, Y_Pred_poly_train)
rmse_poly_test = mean_squared_error(y_test, Y_Pred_poly_test)
r2_poly_train = r2_score(y_train, Y_Pred_poly_train)
r2_poly_test = r2_score(y_test, Y_Pred_poly_test)
print("--------------- For Polynomial with Degree = {} ---------------".format(deg))
print("RMSE after prediction on testing data is: {}".format(rmse_poly_test))
print("RMSE after prediction on training data is: {}".format(rmse_poly_train))
print("R2Score after prediction on testing data is: {}".format(r2_poly_test))
print("R2Score after prediction on training data is: {}".format(r2_poly_train))
deg = 6
Y_Pred_poly_test = poly_reg(X_train, y_train, X_test, deg)
Y_Pred_poly_train = poly_reg(X_train, y_train, X_train, deg)
rmse_poly_train = mean_squared_error(y_train, Y_Pred_poly_train)
rmse_poly_test = mean_squared_error(y_test, Y_Pred_poly_test)
r2_poly_train = r2_score(y_train, Y_Pred_poly_train)
r2_poly_test = r2_score(y_test, Y_Pred_poly_test)
print("--------------- For Polynomial with Degree = {} ---------------".format(deg))
print("RMSE after prediction on testing data is: {}".format(rmse_poly_test))
print("RMSE after prediction on training data is: {}".format(rmse_poly_train))
print("R2Score after prediction on testing data is: {}".format(r2_poly_test))
print("R2Score after prediction on training data is: {}".format(r2_poly_train))
###Output
--------------- For Polynomial with Degree = 6 ---------------
RMSE after prediction on testing data is: 29483.688159379035
RMSE after prediction on training data is: 5.417585903688617e-24
R2Score after prediction on testing data is: -394.6846502575523
R2Score after prediction on training data is: 1.0
###Markdown
For the 3 degrees we tested in this model (1, 3, 6) we got different results for the RMSE and R2 scores. These scores can now help us understand how the models performed: Polynomial Model with Degree=1: this was a good generalisation, as the scores/errors on the training and testing data were comparable. Polynomial Models with Degree=3 and 6: both these models were highly overfitted, with great results on the training data but very bad results on the testing data. Regularization
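For reference (taken from the scikit-learn documentation rather than from this notebook), the Lasso model used in the next cell minimises the least-squares loss plus an $\ell_1$ penalty on the coefficients, with `alpha` controlling the penalty strength:

$$\min_{w} \; \frac{1}{2 n_{\mathrm{samples}}} \lVert y - Xw \rVert_2^2 + \alpha \lVert w \rVert_1$$

A larger `alpha` shrinks more coefficients towards (and eventually to) zero, which is what tames the overfitting seen at the higher polynomial degrees.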
###Code
def poly_reg(X,y,X2, deg):
poly_reg = PolynomialFeatures(degree=deg)
X_poly = poly_reg.fit_transform(X)
clf = Lasso(alpha=0.01, max_iter=10000)
clf.fit(X_poly, y)
return clf.predict(poly_reg.fit_transform(X2))
deg = 1
Y_Pred_poly_test = poly_reg(X_train, y_train, X_test, deg)
Y_Pred_poly_train = poly_reg(X_train, y_train, X_train, deg)
rmse_poly_train = mean_squared_error(y_train, Y_Pred_poly_train)
rmse_poly_test = mean_squared_error(y_test, Y_Pred_poly_test)
r2_poly_train = r2_score(y_train, Y_Pred_poly_train)
r2_poly_test = r2_score(y_test, Y_Pred_poly_test)
print("--------------- For Polynomial with Degree = {} ---------------".format(deg))
print("RMSE after prediction on testing data is: {}".format(rmse_poly_test))
print("RMSE after prediction on training data is: {}".format(rmse_poly_train))
print("R2Score after prediction on testing data is: {}".format(r2_poly_test))
print("R2Score after prediction on training data is: {}".format(r2_poly_train))
deg = 3
Y_Pred_poly_test = poly_reg(X_train, y_train, X_test, deg)
Y_Pred_poly_train = poly_reg(X_train, y_train, X_train, deg)
rmse_poly_train = mean_squared_error(y_train, Y_Pred_poly_train)
rmse_poly_test = mean_squared_error(y_test, Y_Pred_poly_test)
r2_poly_train = r2_score(y_train, Y_Pred_poly_train)
r2_poly_test = r2_score(y_test, Y_Pred_poly_test)
print("--------------- For Polynomial with Degree = {} ---------------".format(deg))
print("RMSE after prediction on testing data is: {}".format(rmse_poly_test))
print("RMSE after prediction on training data is: {}".format(rmse_poly_train))
print("R2Score after prediction on testing data is: {}".format(r2_poly_test))
print("R2Score after prediction on training data is: {}".format(r2_poly_train))
deg = 6
Y_Pred_poly_test = poly_reg(X_train, y_train, X_test, deg)
Y_Pred_poly_train = poly_reg(X_train, y_train, X_train, deg)
rmse_poly_train = mean_squared_error(y_train, Y_Pred_poly_train)
rmse_poly_test = mean_squared_error(y_test, Y_Pred_poly_test)
r2_poly_train = r2_score(y_train, Y_Pred_poly_train)
r2_poly_test = r2_score(y_test, Y_Pred_poly_test)
print("--------------- For Polynomial with Degree = {} ---------------".format(deg))
print("RMSE after prediction on testing data is: {}".format(rmse_poly_test))
print("RMSE after prediction on training data is: {}".format(rmse_poly_train))
print("R2Score after prediction on testing data is: {}".format(r2_poly_test))
print("R2Score after prediction on training data is: {}".format(r2_poly_train))
###Output
--------------- For Polynomial with Degree = 6 ---------------
RMSE after prediction on testing data is: 77.81543919224326
RMSE after prediction on training data is: 0.6819807546917896
R2Score after prediction on testing data is: -0.04431896969531257
R2Score after prediction on training data is: 0.9922410957606486
|
Texture Synthesis.ipynb | ###Markdown
The code is by Anastasia Opara (www.anastasiaopara.com). Provided for use under the MIT license. Based on the "Texture Synthesis with Non-parametric Sampling" paper by Alexei A. Efros and Thomas K. Leung
###Code
#DONT FORGET TO RUN THIS :)
from textureSynthesis import *
from makeGif import *
###Output
_____no_output_____
###Markdown
Here is the part that requires your input! :) So, what are all those parameters anyway? Glad you asked!* **exampleMapPath** - a string with a path to the example image that you want to generate more of!* **outputPath** - a path where you want your output image(s) to be saved to! (the algorithm will also create a txt file with your parameters, so you don't forget what your setting for each generation were ^^)* **outputSize** - the size of the generated image* **searchKernelSize** - is how 'far' each pixel is aware of neighbouring pixels. With bigger value you will capture more 'global' structures (but it will be slower)* **truncation** - once we have an X number of candidate colors sampled from the example map for a given pixel, we need to choose which one we go with. Truncation makes sure you don't pick too unlikely samples. Make sure to keep the value in [0,1) range, where 0 is no truncation at all, and 0.9 means you will keep only 10% best samples and choose from them* **attenuation** - it goes together with truncation! attenuation is a 2nd step and it makes sure you will prioritize higher probability samples (if you want to of course! you can turn it off by setting value to 1). Make sure to keep it in [1, inf). If you feel very experimental, you can set it <1 which, on the contrary, will prioritize lower likelihood samples! (haven't tried myself)* **snapshots** - will save an image per iteration (if False, only save the final image) - needed if you want to make a gif :)And...that's all! Have fun :)
###Code
#PUT YOUR PARAMETERS HERE
exampleMapPath = "imgs/2.jpg"
outputSize = [75,75]
outputPath = "out/1/"
searchKernelSize = 15
textureSynthesis(exampleMapPath, outputSize, searchKernelSize, outputPath, attenuation = 80, truncation = 0.8, snapshots = True)
###Output
_____no_output_____
###Markdown
*Make GIF!*If you chose 'snapshots = True' option, then you can convert the sequence of images into an animated GIF! * **frame_every_X_steps** - sometimes you want your GIF to not include *every* frame, this one allows you to skip X number of frames! (don't worry, it will always end up on the last frame to show the fully resolved image)* **repeat_ending** - specify how many frames the GIF will loop over the final resolved image
###Code
gifOutputPath = "out/outGif.gif"
makeGif(outputPath, gifOutputPath, frame_every_X_steps = 15, repeat_ending = 15)
###Output
_____no_output_____ |
GRIB/Demo.ipynb | ###Markdown
Working with archived MESAN and LANTMET dataThis should serve as a simple demonstration showing how archived MESAN and LANTMET data can be accessed and manipulated easily. We will load interpolated weather data filtered from archived GRIB files from SMHI and fetch archived observed data for a Lantmet station at Arvidsjaur. This will be used to plot both data and the difference between them. Both functions described in this demonstration can be found in the function collection METCOMP_utils.py Prerequisites:- GRIB2CSV.ipynb- In GRIB2CSV.ipynb, call GRIB_to_CSV() with inputs specified below:
````python
points = [{'id': '24688', 'lat': 55.6689, 'lon': 13.1023}]
start_date = datetime.date(2020, 9, 1)
end_date = datetime.date(2020, 9, 7)
GRIB_to_CSV(points, start_date, end_date)
````
Read MESAN filesStreamlined function to conveniently load several CSV files into one dataframe. Select station id, data source and time interval. Returns dataframe containing all data for the specified station between start_date and end_date. This function can also be used to read both stored MESAN and LANTMET data. In this example, only MESAN will be loaded while LANTMET is fetched directly from the API.- Deals with missing files/directories.- Returned dataframe is chronologically sorted.
###Code
import os
import datetime
import pandas as pd
# Combine data from all CSV files into a dataframe.
# @params stationId: station id as a string.
# start_date: date object. Includes this date when reading.
# example: datetime.date(2020, 9, 1)
# end_date: date object. Includes this date when reading.
# folder: determines which folder to get data from (MESAN_CSV or LANTMET_CSV).
# folder = True -> MESAN
# folder = False -> LANTMET
# Can also be a string.
# example: folder = 'MESAN' or 'LANTMET'
# @returns comb_df: concatenated dataframe containing all csv data
# chronologically. None if a file was not found.
def read_CSV(stationId, folder, start_date, end_date):
# Used if folder is a string to translate to boolean.
trans_dict = {'MESAN_CSV': True,
'MESAN': True,
'LANTMET_CSV': False,
'LANTMET': False}
# If folder is a string, check if folder is a key in trans_dict.
if isinstance(folder, str):
try:
# folder is assigned a boolean value corresponding to data source.
folder = trans_dict[folder]
except KeyError:
# User provided key not existing in trans_dict.
print('Key \'' + folder + '\' can not be used to specify data source.')
return None
if folder:
station_dir = 'MESAN_CSV/' + stationId + '/'
else:
station_dir = 'LANTMET_CSV/' + stationId + '/'
# Check if dir exists.
if not os.path.isdir(station_dir):
print('read_CSV() >>> No directory: ' + station_dir)
# Loop over days
current_date = start_date
frames = []
for n in range(0, (end_date - start_date + datetime.timedelta(days=1)).days):
date_str = current_date.strftime('%Y-%m-%d')
if folder:
current_file = 'MESAN_' + date_str + '.csv'
else:
current_file = 'LANTMET_' + date_str + '.csv'
# Try to read file, if file not found, return a None object.
try:
frames.append(pd.read_csv(station_dir + current_file))
except IOError as e:
print('read_CSV() >>> File not found. (' + current_file + ')')
return None
current_date = current_date + datetime.timedelta(days=1)
comb_df = pd.concat(frames, ignore_index=True)
return comb_df
###Output
_____no_output_____
###Markdown
Fetch data from LANTMETThis function allows LANTMET data to be fetched as dataframes for easy data manipulation. Select a station id and time interval. The function returns a complete dataframe stretching from 00:00:00 at start_date to 23:00:00 at end_date.- Deals with bad requests.- Fills any missing timestamps with None objects to ensure continuity between rows.- Chronologically sorted dataframe.
###Code
import json
import requests
import datetime
import pandas as pd
# Get LANTMET parameter data for a selected station over a time interval
# as a pandas dataframe. Missing datapoints is filled to ensure continuity and
# chronological sorting.
# @params id: station id as a string, example: id='149'
# start_date: date object representing earliest date in selected time interval.
# end_date: date object representing latest date in selected time interval.
# @returns pandas dataframe with one column for each timestamp and one
# column per parameter where each row is separated by one hour.
def get_LANTMET(id, start_date, end_date):
start_str = start_date.strftime('%Y-%m-%d')
end_str = end_date.strftime('%Y-%m-%d')
url = 'https://www.ffe.slu.se/lm/json/DownloadJS.cfm?weatherStationID=' + id + '&startDate=' + start_str + '&endDate=' + end_str
# Try accessing API.
try:
r = requests.get(url)
except requests.exceptions.RequestException as e:
# If accessing API fails
print('get_LANTMET() >>> Request failed.\n' + str(e.__str__()))
return None
# If data is not in JSON format, return.
try:
data = r.json()
except json.JSONDecodeError:
print('get_LANTMET() >>> Fetched data is not in JSON format.')
print(r.text)
return None
# Init dict timestamp keys.
tmp_dict = {}
for e in data:
tmp_dict[e['timeMeasured']] = {'Timestamp': e['timeMeasured'].split('+')[0] + 'Z'}
# Add parameter values.
params = {}
for e in data:
tmp_dict[e['timeMeasured']][e['elementMeasurementTypeId']] = e['value']
params[e['elementMeasurementTypeId']] = None
# Check if any timestamps are missing, if so fill with None values for each parameter.
# This also ensures chonologically sorting.
sorted_data = []
current_dt = start_date
for n in range(0, (end_date - start_date + datetime.timedelta(days=1)).days):
for i in range(0, 24):
# Get string representation of hour.
hour_str = ''
if i < 10:
hour_str = '0' + str(i)
else:
hour_str = str(i)
datetime_str = current_dt.strftime('%Y-%m-%d') + 'T' + hour_str + ':00:00'
# Deal with missing timestamps in fetched data.
try:
# Append subdicts to list.
sorted_data.append(tmp_dict[datetime_str + '+01:00'])
except KeyError:
# Timestamp not found in dict. Add one with None values for each param.
print('Missing data for ' + datetime_str + '.')
tmp = {}
tmp['Timestamp'] = datetime_str + 'Z'
for param in params:
tmp[param] = None
sorted_data.append(tmp)
current_dt = current_dt + datetime.timedelta(days=1)
res_df = pd.DataFrame(sorted_data)
return res_df
###Output
_____no_output_____
###Markdown
Demonstration- Fetch data for first seven days of September 2020 from LANTMET.- Load corresponding MESAN data extracted from archived GRIB-files.- Plot some data.
###Code
# Example
import matplotlib.pyplot as plt
import numpy as np
# Select station and time interval.
station = '24688'
start_date = datetime.date(2020, 9, 1)
end_date = datetime.date(2020, 9, 7)
# Load data.
df_LANTMET = get_LANTMET(station, start_date, end_date)
df_MESAN = read_CSV(station, 'MESAN',start_date, end_date)
# Plot individual data and error.
fig, axs = plt.subplots(3, figsize=(15,10))
fig.suptitle('Temperature variation during September week', fontsize=16)
fig.tight_layout(pad=4.0)
hours = [int(x) for x in range(0, df_LANTMET.shape[0])]
# LANTMET DATA
axs[0].plot(hours, df_LANTMET['TM'])
axs[0].xaxis.set_ticks(np.arange(min(hours), max(hours)+1, 24.0))
axs[0].set_ylabel('LANTMET (°C)', fontsize=16)
axs[0].set_autoscale_on(False)
axs[0].vlines(np.arange(min(hours), max(hours)+1, 24.0), 0, 20, linestyles='dotted')
# MESAN DATA
axs[1].plot(hours, df_MESAN['t_sfc'] - 273.15)
axs[1].xaxis.set_ticks(np.arange(min(hours), max(hours)+1, 24.0))
axs[1].set_ylabel('MESAN (°C)', fontsize=16)
axs[1].set_autoscale_on(False)
axs[1].vlines(np.arange(min(hours), max(hours)+1, 24.0), 0, 20, linestyles='dotted')
# ABSOLUTE ERROR
axs[2].plot(hours, abs((df_MESAN['t_sfc'] - 273.15) - df_LANTMET['TM']), 'r')
axs[2].xaxis.set_ticks(np.arange(min(hours), max(hours)+1, 24.0))
axs[2].set_ylabel('ERROR (°C)', fontsize=16)
axs[2].set_autoscale_on(False)
axs[2].vlines(np.arange(min(hours), max(hours)+1, 24.0), 0, 20, linestyles='dotted')
axs[2].set_xlabel('Hours', fontsize=16);
###Output
_____no_output_____ |
doc/pub/BayesianBasics/ipynb/BayesianBasics.ipynb | ###Markdown
Learning from data: Basics of Bayesian Statistics **Christian Forssén**, Department of Physics, Chalmers University of Technology, SwedenDate: **Sep 12, 2019**Copyright 2018-2019, Christian Forssén. Released under CC Attribution-NonCommercial 4.0 license How do you feel about statistics?Disraeli (attr.): > “There are three kinds of lies: lies, damned lies, and statistics.”Rutherford:> “If your result needs a statistician then you should design a better experiment.”Laplace:> “La théorie des probabilités n'est que le bon sens réduit au calcul”Bayesian Methods: rules of statistical inference are an application of the laws of probability Inference * Deductive inference. Cause $\to$ Effect. * Inference to best explanation. Effect $\to$ Cause. * Scientists need a way to: * Quantify the strength of inductive inferences; * Update that quantification as they acquire new data. Some historyAdapted from D.S. Sivia[^Sivia]:[^Sivia]: Sivia, Devinderjit, and John Skilling. Data Analysis : A Bayesian Tutorial, OUP Oxford, 2006> Although the frequency definition appears to be more objective, its range of validity is also far more limited. For example, Laplace used (his) probability theory to estimate the mass of Saturn, given orbital data that were available to him from various astronomical observatories. In essence, he computed the posterior pdf for the mass M , given the data and all the relevant background information I (such as a knowledge of the laws of classical mechanics): prob(M|{data},I); this is shown schematically in the figure [Fig. 1.2].> To Laplace, the (shaded) area under the posterior pdf curve between $m_1$ and $m_2$ was a measure of how much he believed that the mass of Saturn lay in the range $m_1 \le M \le m_2$. As such, the position of the maximum of the posterior pdf represents a best estimate of the mass; its width, or spread, about this optimal value gives an indication of the uncertainty in the estimate. Laplace stated that: ‘ . . . it is a bet of 11,000 to 1 that the error of this result is not 1/100th of its value.’ He would have won the bet, as another 150 years’ accumulation of data has changed the estimate by only 0.63%!> According to the frequency definition, however, we are not permitted to use probability theory to tackle this problem. This is because the mass of Saturn is a constant and not a random variable; therefore, it has no frequency distribution and so probability theory cannot be used.> > If the pdf [of Fig. 1.2] had to be interpreted in terms of the frequency definition, we would have to imagine a large ensemble of universes in which everything remains constant apart from the mass of Saturn.> As this scenario appears quite far-fetched, we might be inclined to think of [Fig. 1.2] in terms of the distribution of the measurements of the mass in many repetitions of the experiment. Although we are at liberty to think about a problem in any way that facilitates its solution, or our understanding of it, having to seek a frequency interpretation for every data analysis problem seems rather perverse.> For example, what do we mean by the ‘measurement of the mass’ when the data consist of orbital periods? Besides, why should we have to think about many repetitions of an experiment that never happened? 
What we really want to do is to make the best inference of the mass given the (few) data that we actually have; this is precisely the Bayes and Laplace view of probability.> Faced with the realization that the frequency definition of probability theory did not permit most real-life scientific problems to be addressed, a new subject was invented — statistics! To estimate the mass of Saturn, for example, one has to relate the mass to the data through some function called the statistic; since the data are subject to ‘random’ noise, the statistic becomes the random variable to which the rules of probability the- ory can be applied. But now the question arises: How should we choose the statistic? The frequentist approach does not yield a natural way of doing this and has, therefore, led to the development of several alternative schools of orthodox or conventional statis- tics. The masters, such as Fisher, Neyman and Pearson, provided a variety of different principles, which has merely resulted in a plethora of tests and procedures without any clear underlying rationale. This lack of unifying principles is, perhaps, at the heart of the shortcomings of the cook-book approach to statistics that students are often taught even today. Probability density functions (pdf:s) * $p(A|B)$ reads “probability of $A$ given $B$” * Simplest examples are discrete, but physicists often interested in continuous case, e.g., parameter estimation. * When integrated, continuous pdfs become probabilities $\Rightarrow$ pdfs are NOT dimensionless, even though probabilities are. * 68%, 95%, etc. intervals can then be computed by integration * Certainty about a parameter corresponds to $p(x) = \delta(x-x_0)$ Properties of PDFsThere are two properties that all PDFs must satisfy. The first one ispositivity (assuming that the PDF is normalized) $$0 \leq p(x).$$ Naturally, it would be nonsensical for any of the values of the domainto occur with a probability less than $0$. Also,the PDF must be normalized. That is, all the probabilities must add upto unity. The probability of "anything" to happen is always unity. Fordiscrete and continuous PDFs, respectively, this condition is $$\begin{align*}\sum_{x_i\in\mathbb D} p(x_i) & = 1,\\\int_{x\in\mathbb D} p(x)\,dx & = 1.\end{align*}$$ Important distributions, the uniform distributionLet us consider some important, univariate distributions.The first oneis the most basic PDF; namely the uniform distribution $$\begin{equation}p(x) = \frac{1}{b-a}\theta(x-a)\theta(b-x).\label{eq:unifromPDF} \tag{1}\end{equation}$$ For $a=0$ and $b=1$ we have $$p(x) = \left\{\begin{array}{ll}1 & x \in [0,1],\\0 & \mathrm{otherwise}\end{array}\right.$$ Gaussian distributionThe second one is the univariate Gaussian Distribution $$p(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp{(-\frac{(x-\mu)^2}{2\sigma^2})},$$ with mean value $\mu$ and standard deviation $\sigma$. If $\mu=0$ and $\sigma=1$, it is normally called the **standard normal distribution** $$p(x) = \frac{1}{\sqrt{2\pi}} \exp{(-\frac{x^2}{2})},$$ Expectation valuesLet $h(x)$ be an arbitrary continuous function on the domain of the stochasticvariable $X$ whose PDF is $p(x)$. We define the *expectation value*of $h$ with respect to $p$ as follows $$\begin{equation}\langle h \rangle_X \equiv \int\! h(x)p(x)\,dx\label{eq:expectation_value_of_h_wrt_p} \tag{2}\end{equation}$$ Whenever the PDF is known implicitly, like in this case, we will dropthe index $X$ for clarity. A particularly useful class of special expectation values are the*moments*. 
The $n$-th moment of the PDF $p$ is defined asfollows $$\langle x^n \rangle \equiv \int\! x^n p(x)\,dx$$ Stochastic variables and the main concepts, mean valuesThe zero-th moment $\langle 1\rangle$ is just the normalization condition of$p$. The first moment, $\langle x\rangle$, is called the *mean* of $p$and often denoted by the letter $\mu$ $$\langle x\rangle \equiv \mu = \int x p(x)dx,$$ for a continuous distribution and $$\langle x\rangle \equiv \mu = \sum_{i=1}^N x_i p(x_i),$$ for a discrete distribution. Qualitatively it represents the centroid or the average value of thePDF and is therefore simply called the expectation value of $p(x)$. Mean, median, averageThe values of the **mode**, **mean**, **median** can all be used as point estimates for the "probable" value of $x$. For some pdfs, they will all be the same.The 68%/95% probability regions are shown in dark/light shading. When applied to Bayesian posteriors, these are known as credible intervals or DoBs (degree of belief intervals) or Bayesian confidence intervals. The horizontal extent on the $x$-axis translates into the vertical extent of the error bar or error band for $x$. Stochastic variables and the main concepts, central moments, the varianceA special version of the moments is the set of *central moments*, the n-th central moment defined as $$\langle (x-\langle x\rangle )^n\rangle \equiv \int\! (x-\langle x\rangle)^n p(x)\,dx$$ The zero-th and first central moments are both trivial, equal $1$ and$0$, respectively. But the second central moment, known as the*variance* of $p$, is of particular interest. For the stochasticvariable $X$, the variance is denoted as $\sigma^2_X$ or $\mathrm{Var}(X)$ $$\begin{align*}\sigma^2_X &=\mathrm{Var}(X) = \langle (x-\langle x\rangle)^2\rangle =\int (x-\langle x\rangle)^2 p(x)dx\\& = \int\left(x^2 - 2 x \langle x\rangle^{2} +\langle x\rangle^2\right)p(x)dx\\& = \langle x^2\rangle - 2 \langle x\rangle\langle x\rangle + \langle x\rangle^2\\& = \langle x^2 \rangle - \langle x\rangle^2\end{align*}$$ The square root of the variance, $\sigma =\sqrt{\langle (x-\langle x\rangle)^2\rangle}$ is called the **standard deviation** of $p$. It is the RMS (root-mean-square)value of the deviation of the PDF from its mean value, interpretedqualitatively as the "spread" of $p$ around its mean. Probability Distribution FunctionsThe following table collects properties of probability distribution functions.In our notation we reserve the label $p(x)$ for the probability of a certain event,while $P(x)$ is the cumulative probability. Discrete PDF Continuous PDF Domain $\left\{x_1, x_2, x_3, \dots, x_N\right\}$ $[a,b]$ Probability $p(x_i)$ $p(x)dx$ Cumulative $P_i=\sum_{l=1}^ip(x_l)$ $P(x)=\int_a^xp(t)dt$ Positivity $0 \le p(x_i) \le 1$ $p(x) \ge 0$ Positivity $0 \le P_i \le 1$ $0 \le P(x) \le 1$ Monotonic $P_i \ge P_j$ if $x_i \ge x_j$ $P(x_i) \ge P(x_j)$ if $x_i \ge x_j$ Normalization $P_N=1$ $P(b)=1$ Quick introduction to `scipy.stats`If you google `scipy.stats`, you'll likely get the manual page as the first hit: [https://docs.scipy.org/doc/scipy/reference/stats.html](https://docs.scipy.org/doc/scipy/reference/stats.html). Here you'll find a long list of the continuous and discrete distributions that are available, followed (scroll way down) by many different methods (functions) to extract properties of a distribution (called Summary Statistics) and do many other statistical tasks.Follow the link for any of the distributions (your choice!) 
to find its mathematical definition, some examples of how to use it, and a list of methods. Some methods of interest to us here: * `mean()` - Mean of the distribution. * `median()` - Median of the distribution. * `pdf(x)` - Value of the probability density function at x. * `rvs(size=numpts)` - generate numpts random values of the pdf. * `interval(alpha)` - Endpoints of the range that contains alpha percent of the distribution. The Bayesian recipeAssess hypotheses by calculating their probabilities $p(H_i | \ldots)$ conditional on known and/or presumed information using the rules of probability theory.Probability Theory Axioms:Product (AND) rule : : $p(A, B | I) = p(A|I) p(B|A, I) = p(B|I)p(A|B,I)$ Should read $p(A,B|I)$ as the probability for propositions $A$ AND $B$ being true given that $I$ is true.Sum (OR) rule: : $p(A + B | I) = p(A | I) + p(B | I) - p(A, B | I)$ $p(A+B|I)$ is the probability that proposition $A$ OR $B$ is true given that $I$ is true.Normalization: : $p(A|I) + p(\bar{A}|I) = 1$ $\bar{A}$ denotes the proposition that $A$ is false. Bayes' theoremBayes' theorem follows directly from the product rule $$p(A|B,I) = \frac{p(B|A,I) p(A|I)}{p(B|I)}.$$ The importance of this property to data analysis becomes apparent if we replace $A$ and $B$ by hypothesis($H$) and data($D$): $$\begin{equation}p(H|D,I) = \frac{p(D|H,I) p(H|I)}{p(D|I)}.\label{eq:bayes} \tag{3}\end{equation}$$ The power of Bayes’ theorem lies in the fact that it relates the quantity of interest, the probability that the hypothesis is true given the data, to the term we have a better chance of being able to assign, the probability that we would have observed the measured data if the hypothesis was true.The various terms in Bayes’ theorem have formal names. * The quantity on the far right, $p(H|I)$, is called the *prior* probability; it represents our state of knowledge (or ignorance) about the truth of the hypothesis before we have analysed the current data. * This is modified by the experimental measurements through $p(D|H,I)$, the *likelihood* function, * The denominator $p(D|I)$ is called the *evidence*. It does not depend on the hypothesis and can be regarded as a normalization constant.* Together, these yield the *posterior* probability, $p(H|D, I )$, representing our state of knowledge about the truth of the hypothesis in the light of the data. In a sense, Bayes’ theorem encapsulates the process of learning. The friends of Bayes' theoremNormalization: : $\sum_i p(H_i|I) = 1$.Marginalization: : $p(A|I) = \sum_i p(H_i|A,I) p(A|I) = \sum_i p(A,H_i|I)$.In the above, $H_i$ is an exclusive and exhaustive list of hypotheses. For example,let’s imagine that there are five candidates in a presidential election; then $H_1$ could be the proposition that the first candidate will win, and so on. The probability that $A$ is true, for example that unemployment will be lower in a year’s time (given all relevant information $I$, but irrespective of whoever becomes president) is given by $\sum_i p(A,H_i|I)$ as shown by using normalization and applying the product rule.Normalization (continuum limit): : $\int dx p(x|I) = 1$.Marginalization (continuum limit): : $p(y|I) = \int dx p(x,y|I)$.In the continuum limit of propositions we must understand $p(\ldots)$ as a pdf (probability density function).Marginalization is a very powerful device in data analysis because it enables us to deal with nuisance parameters; that is, quantities which necessarily enter the analysis but are of no intrinsic interest. 
The unwanted background signal present in many experimental measurements are examples of nuisance parameters. Example: Is this a fair coin?Let us begin with the analysis of data from a simple coin-tossing experiment. Given that we had observed 6 heads in 8 flips, would you think it was a fair coin? By fair, we mean that we would be prepared to lay an even 1 : 1 bet on the outcome of a flip being a head or a tail. If we decide that the coin was fair, the question which follows naturally is how sure are we that this was so; if it was not fair, how unfair do we think it was? Furthermore, if we were to continue collecting data for this particular coin, observing the outcomes of additional flips, how would we update our belief on the fairness of the coin?A sensible way of formulating this problem is to consider a large number of hypotheses about the range in which the bias-weighting of the coin might lie. If we denote the bias-weighting by $p_H$, then $p_H = 0$ and $p_H = 1$ can represent a coin which produces a tail or a head on every flip, respectively. There is a continuum of possibilities for the value of $p_H$ between these limits, with $p_H = 0.5$ indicating a fair coin. Our state of knowledge about the fairness, or the degree of unfairness, of the coin is then completely summarized by specifying how much we believe these various propositions to be true. Let us perform a computer simulation of a coin-tossing experiment. This provides the data that we will be analysing.
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(999) # for reproducibility
pH=0.6 # biased coin
flips=np.random.rand(2**12) # simulates 4096 coin flips
heads=flips<pH # boolean array, heads[i]=True if flip i is heads
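# quick sanity check (illustration added here, not part of the original notebook):
# the observed head fraction should be close to the true bias pH=0.6
# print("head fraction over all flips:", heads.mean())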
###Output
_____no_output_____
###Markdown
In the light of this data, our inference about the fairness of this coin is summarized by the conditional pdf: $p(p_H|D,I)$. This is, of course, shorthand for the limiting case of a continuum of propositions for the value of $p_H$; that is to say, the probability that $p_H$ lies in an infinitesimally narrow range is given by $p(p_H|D,I) dp_H$. To estimate this posterior pdf, we need to use Bayes’ theorem ([3](eq:bayes)). We will ignore the denominator $p(D|I)$ as it does not involve bias-weighting explicitly, and it will therefore not affect the shape of the desired pdf. At the end we can evaluate the missing constant subsequently from the normalization condition $$\begin{equation}\int_0^1 p(p_H|D,I) dp_H = 1.\label{eq:coin_posterior_norm} \tag{4}\end{equation}$$ The prior pdf, $p(p_H|I)$, represents what we know about the coin given only the information $I$ that we are dealing with a ‘strange coin’. We could keep a very open mind about the nature of the coin; a simple probability assignment which reflects this is a uniform, or flat, prior $$\begin{equation}p(p_H|I) = \left\{ \begin{array}{ll}1 & 0 \le p_H \le 1, \\0 & \mathrm{otherwise}.\end{array} \right.\label{eq:coin_prior_uniform} \tag{5}\end{equation}$$ We will get back later to the choice of prior and its effect on the analysis.This prior state of knowledge, or ignorance, is modified by the data through the likelihood function $p(D|p_H,I)$. It is a measure of the chance that we would have obtained the data that we actually observed, if the value of the bias-weighting was given (as known). If, in the conditioning information $I$, we assume that the flips of the coin were independent events, so that the outcome of one did not influence that of another, then the probability of obtaining the data `H heads in N tosses' is given by the binomial distribution (we leave a formal definition of this to a statistics textbook) $$\begin{equation}p(D|p_H,I) \propto p_H^H (1-p_H)^{N-H}.\label{_auto1} \tag{6}\end{equation}$$ It seems reasonable because $p_H$ is the chance of obtaining a head on any flip, and there were $H$ of them, and $1-p_H$ is the corresponding probability for a tail, of which there were $N-H$. We note that this binomial distribution also contains a normalization factor, but we will ignore it since it does not depend explicitly on $p_H$, the quantity of interest. It will be absorbed by the normalization condition ([4](eq:coin_posterior_norm)).We perform the setup of this Bayesian framework on the computer.
###Code
def prior(pH):
p=np.zeros_like(pH)
p[(0<=pH)&(pH<=1)]=1 # allowed range: 0<=pH<=1
return p # uniform prior
def likelihood(pH,data):
N = len(data)
no_of_heads = sum(data)
no_of_tails = N - no_of_heads
return pH**no_of_heads * (1-pH)**no_of_tails
def posterior(pH,data):
p=prior(pH)*likelihood(pH,data)
norm=np.trapz(p,pH)
return p/norm
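# minimal usage sketch (illustration, not part of the original analysis):
# pH_grid = np.linspace(0, 1, 1000)
# post_8_tosses = posterior(pH_grid, heads[:8])
# np.trapz(post_8_tosses, pH_grid) # should be 1 after normalization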
###Output
_____no_output_____
###Markdown
The next step is to confront this setup with the simulated data. To get a feel for the result, it is instructive to see how the posterior pdf evolves as we obtain more and more data pertaining to the coin. The results of such an analysis are shown in Fig. [fig:coinflipping](fig:coinflipping).
###Code
pH=np.linspace(0,1,1000)
fig, axs = plt.subplots(nrows=4,ncols=3,sharex=True,sharey='row',figsize=(14,14))
axs_vec=np.reshape(axs,-1)
axs_vec[0].plot(pH,prior(pH))
for ndouble in range(11):
ax=axs_vec[1+ndouble]
ax.plot(pH,posterior(pH,heads[:2**ndouble]))
ax.text(0.1, 0.8, '$N={0}$'.format(2**ndouble), transform=ax.transAxes)
for row in range(4): axs[row,0].set_ylabel('$p(p_H|D_\mathrm{obs},I)$')
for col in range(3): axs[-1,col].set_xlabel('$p_H$')
###Output
_____no_output_____ |
ColonizingMars/AccessingData/google-search-trends.ipynb | ###Markdown
![Callysto.ca Banner](https://github.com/callysto/curriculum-notebooks/blob/master/callysto-notebook-banner-top.jpg?raw=true) Google TrendsGoogle Trends has data going back to January 1, 2004 about the frequencies of search terms, which can be imported into a pandas DataFrame using the [pytrends](https://github.com/GeneralMills/pytrends) library. We can use various [methods](https://github.com/GeneralMills/pytrends#api-methods) such as `interest_over_time()` or `interest_by_region()`.
###Code
!pip install --user pytrends
from pytrends.request import TrendReq
import pandas as pd
pytrend = TrendReq()
pytrend.build_payload(kw_list=['Mars', 'Venus'])
df = pytrend.interest_over_time()
df
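# further illustration (assumes the payload built above; not in the original notebook):
# regional breakdown for the same keywords, as mentioned in the text
# region_df = pytrend.interest_by_region()
# region_df.sort_values('Mars', ascending=False).head()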
###Output
_____no_output_____ |
notebooks/Delegation Analysis.ipynb | ###Markdown
Delegation analysis - Munge data to clean different Delegation and Colony names - Clean missing and unusable data - Compute statistics
###Code
# Imports
import pandas as pd
import numpy as np
# Read data
## Inmuebles 24
inmuebles24 = pd.read_csv("../data/2018-06-03/inmuebles24.csv", delimiter="~")
## Lamudi
## Propiedades
## Segunda Mano
## Trovit
# Concat data
df = pd.concat([inmuebles24])
df.head()
# Munge Location data
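# assumption inferred from the splits below: 'location' strings look like
# '<address> - <colony>, <delegation>'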
df['delegation'] = df[['location']].applymap(lambda x: x.split(',')[-1].strip())
df['colony'] = df[['location']].applymap(lambda x: x.split('-')[-1].split(',')[0].strip())
df['address'] = df[['location']].applymap(lambda x: x.split('-')[0].strip())
df.head()
# Verify Location Stats
print('Number of Total Datapoints:', len(df))
print('Number of Unique Delegations', df.delegation.value_counts().count())
print('Number of Unique Colonies', df.colony.value_counts().count())
print('Stats per Delegation with all Data')
df.groupby('delegation').price.describe().sort_values(by=['mean'],ascending=False)
# Find outliers
outbounds = df.groupby('delegation').price.agg(['mean', 'std']).reset_index()
# Compute Upper and Lower bounds
outbounds['upper'] = outbounds['mean'] + outbounds['std']
outbounds['lower'] = outbounds['mean'] - outbounds['std']
del outbounds['mean'], outbounds['std']
df = pd.merge(df, outbounds, on='delegation', how='inner')
df.head()
# Filter Non outliers
data = df[(df['price'] < df['upper']) & (df['price'] > df['lower'])]
print('Total inliers:', len(data))
data.head(3)
print('Stats per Delegation with inliers data')
data.groupby('delegation').price.describe().sort_values(by=['mean'],ascending=False)
###Output
Stats per Delegation with inliers data
###Markdown
Analysis per Delegation of Interest
###Code
# Compute General statistics in delegational data
by_deleg = data[data['delegation'] == 'Benito Juárez'].copy()
print('Amount of Delegational data:', len(by_deleg))
by_deleg.describe()
print("Prices per Colony")
by_deleg[(by_deleg.price < 5000000) & (by_deleg.price > 3000000)]\
.groupby(['colony']).price.describe().sort_values('std')
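# helper to parse listing surface strings (descriptive comment added): values are
# typically ranges such as '80 a 120 m²'; '-' marks a missing value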
def surface_parse(x):
if '-' in x:
return 0.0
x = [float(j.replace(',','').strip()) for j in x.replace('m²','').strip().split('a')]
return min(x)
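# illustration (not in the original): surface_parse('80 a 120 m²') -> 80.0, surface_parse('-') -> 0.0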
by_deleg['min_surface'] = by_deleg['surface'].apply(surface_parse)
df['min_surface'] = df['surface'].apply(surface_parse)
# Compute Price per square meter
by_deleg['price_per_sqm'] = by_deleg['price'] / by_deleg['min_surface']
print('Prices per square meter per colony')
by_deleg[by_deleg.min_surface > 80.0].groupby(['colony','rooms']).price_per_sqm.describe().sort_values('mean')
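# filter parameters below (Spanish identifiers kept from the original):
# minimum price (presumably MXN), minimum surface in m², and target colony substring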
PrecioMinimo = 3500000
SuperficieMinima = 80.0
Colonia = 'Cuauhtémoc'
df[(df.min_surface > SuperficieMinima) \
& (df.price > PrecioMinimo) \
& (df.colony.str.contains(Colonia))]\
.sort_values(by=['price', 'min_surface'], ascending=[True, False])\
[['location', 'price', 'link', 'surface']].drop_duplicates('location').head(50)#.to_dict(orient='records')
# https://www.inmuebles24.com/propiedades/-oportunidad-hasta-fin-de-mes!-54504552.html
# https://www.inmuebles24.com/propiedades/venta-hermosos-ph-hipodromo-condesa-cuauhtemoc-54395302.html
# https://www.inmuebles24.com/propiedades/escandon.-acogedor-y-con-excelente-ubicacion-54105568.html
###Output
_____no_output_____ |
docs/source/application_notebooks/PSF_viewer.ipynb | ###Markdown
Streaming data from micro-manager to napari: PSF Viewerdeveloped by Wiebke Jahr, [Danzl lab], IST Austria, (c) 2020 latest version [on github] If you use this tool, please cite: pycro-manager: Pinkard, H., Stuurman, N., Ivanov, I.E. et al. Pycro-Manager: open-source software for customized and reproducible microscope control. Nat Methods (2021). doi: [10.1038/s41592-021-01087-6] napari: napari contributors (2019). napari: a multi-dimensional image viewer for python. doi: [10.5281/zenodo.3555620] This notebook shows how to acquire data using `micromanager`, then use `pycro-manager` to stream it to `napari`. Buttons to start and stop data acquisition are added to the `napari` window using the `magic-gui` package. In this example, the data displayed in `napari` is resliced to get a live PSF viewer. However, reslicing is only a small example for the data analysis possible using `napari`. Here are two [videos] showing the PSF viewer in action: - PSFViewer-ExternalStageControl_1080p.mp4: z-stage controlled via `micromanager` - PSFViewer-InternalStageControl_1080p.mp4: z-stage controlled via external DAQ control Since the amount of data that can be transferred between `micromanager` and `pycro-manager` is currently limited to 100 MB/s, it's important that no more data is transferred to ensure smooth execution of the software. For both movies, camera acquisition parameters in `micromanager` were set to: - 11-bit depth, - chip-size cropped to the central 512x512 px. - external trigger start (trigger comming at 45 Hz) - exposure time set to 0.01 ms Tested on: - macOS Catalina using `micromanager 2.0.0-gamma1-20210221` [on github]: https://github.com/wiebkejahr/pycro-manager[Danzl lab]: https://danzl-lab.pages.ist.ac.at/[videos]: https://www.dropbox.com/sh/fpr2nitlhfb68od/AAArXxDLclfXWhsyF0x_fP7Ja?dl=0[10.1038/s41592-021-01087-6]: https://doi.org/10.1038/s41592-021-01087-6[10.5281/zenodo.3555620]: https://doi.org/10.5281/zenodo.3555620
###Code
# only execute first time to install all required packages
# has been tested with the indicated package versions
#!pip install pycromanager==0.10.9 napari==0.4.5 pyqt5==5.15.1 magicgui==0.2.5 yappi==1.3.2
# newest: magicgui==0.2.6, but there's an error message when connecting the buttons
# when updating pycromanager, you may have to update micro-manager as well
# when updating magicgui, napari may have to be updated
import time
import numpy as np
import queue
#import yappi # needed for benchmarking multithreaded code
import napari
from napari.qt import thread_worker
from magicgui import magicgui
from pycromanager import Acquisition, multi_d_acquisition_events
# open napari in an extra window
%gui qt
###Output
_____no_output_____
###Markdown
define constantssome constants for microscope parameters and display options global variables for multithreading
###Code
# data acquired on microscope or simulated?
simulate = False
# z-stage controlled through micromanager, or externally?
z_stack_external = False
# clip image to central part. Speeds up display as data size is reduced
# is used as size for simulating data
clip =[128, 128]
# um / px, for correct scaling in napari
size_um = [0.16, 0.16]
# start in um, end in um, number of slices, active slice
z_range = [0, 50, 200, 0]
#z_range = [1100, 1150, 200, 0]
# rescale z dimension independently for display
z_scale = 1
# sleep time to keep software responsive
sleep_time = 0.05
# contrast limits for display
clim = [100, 300]
# number of color channels, active channel
channels = [1, 0]
# color map for display
cmap = ['plasma', 'viridis']
# layer names for the channels
layer_names = ['GFP', 'RFP']
# initialize global variables
# flag to break while loops
acq_running = False
# empty queue for image data and z positions
img_queue = queue.Queue()
# xyz data stack
data = np.random.rand(z_range[2], clip[0], clip[1]) * clim[1]
# if z-stage is controlled through micromanager:
# need bridge to move stage at beginning of stack
# USE WITH CAUTION: only tested with micromanager demo config
if not(simulate) and not(z_stack_external):
from pycromanager import Bridge
bridge = Bridge()
#get object representing micro-manager core
core = bridge.get_core()
print(core)
core.set_position(z_range[0])
###Output
<pycromanager.core.mmcorej_CMMCore object at 0x7fe4f020adf0>
###Markdown
dev_names = core.get_loaded_devices()
for ii in range(dev_names.size()): print(ii, dev_names.get(ii))
print(core.get_property("Camera", "PixelType"))
print(core.get_property("Z", "Label"))
stage_xy = core.get_xy_stage_position()
pos = [stage_xy.get_x(), stage_xy.get_y()]
print(pos)
core.set_position(100)
print('z stage: ', core.get_position())
core.stop('Z')  # this doesnt work, just continues moving
print('z stage: ', core.get_position())
core.set_position(z_range[0])  # this also doesn't work
time.sleep(5)
print('z stage: ', core.get_position())
Function to write data into QueueThis function is shared by the image acquisition / simulation routine. Shapes data as needed and keeps track of both z_position and active channel.
###Code
def place_data(image):
""" fnc to place image data into the queue.
Keeps track of z-position in stacks and of active color channels.
Inputs: np.array image: image data
Global variables: image_queue to write image and z position
z_range to keep track of z position
channels to keep track of channels
"""
global img_queue
global z_range
global channels
img_queue.put([channels[1], z_range[3], np.ravel(image)])
z_range[3] = (z_range[3]+1) % z_range[2]
if z_range[3] == 0:
channels[1] = (channels[1]+1) % channels[0]
#print(z_range, channels)
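# indexing illustration (comment added): with z_range[2] == 200 the z index cycles 0..199,
# and channels[1] advances by one (mod channels[0]) each time a full stack has been queued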
###Output
_____no_output_____
###Markdown
create dummy image and and put into stackcreates dummy image of constant brightness use for testing purposes without microscope stack of increasing brightness helps to identify glitches
###Code
def simulate_image(b, size = [128,128]):
""" fnc to simulate an image of constant brightness
and call fnc to place it into the queue.
Inputs: int b: brightness
np.array size: # of px in image in xy.
"""
place_data(np.ones(size) * b)
def simulate_data(ii, z_range):
""" fnc to create images with constant, but increasing brightness.
Inputs: int ii: counter to increase brightness
int z_range: number of slices in stack"""
for zz in range(z_range[2]):
brightness = (ii+1) * (zz+1) / ((z_range[2]+1)) * clim[1]
simulate_image(brightness, clip)
time.sleep(sleep_time)
# need sleep time especially when simulated datasize is small or this will kill CPU
###Output
_____no_output_____
###Markdown
image process function and pycromanager acquisitiongrabs and clips acquired image built pycromanager acquisition events acquire data and send to image_process_fn
###Code
def grab_image(image, metadata):
""" image_process_fnc to grab image from uManager, clip it to central part
and call the fnc that will put it into the queue.
Inputs: array image: image from micromanager
metadata from micromanager
"""
size = np.shape(image)
image_clipped = image[(size[0]-clip[0])//2:(size[0]+clip[0])//2,
(size[1]-clip[1])//2:(size[1]+clip[1])//2]
place_data(image_clipped)
return image, metadata
def acquire_data(z_range):
""" micro-manager data acquisition. Creates acquisition events for z-stack.
This example: use custom events, not multi_d_acquisition because the
z-stage is not run from micro-manager but controlled via external DAQ."""
with Acquisition(directory=None, name=None,
show_display=True,
image_process_fn = grab_image) as acq:
events = []
for index, z_um in enumerate(np.linspace(z_range[0], z_range[1], z_range[2])):
evt = {"axes": {"z_ext": index}, "z_ext": z_um}
events.append(evt)
acq.acquire(events)
def acquire_multid(z_range):
""" micro-manager data acquisition. Creates acquisition events for z-stack.
This example: use multi_d_acquisition because the z-stage is run
from micro-manager.
Unless hardware triggering is set up in micro-manager, this will be fairly slow:
micro-manager does not sweep the z-stage, but acquires plane by plane. """
with Acquisition(directory=None, name=None,
show_display=False,
image_process_fn = grab_image) as acq:
events = multi_d_acquisition_events(z_start=z_range[0], z_end=z_range[1],
z_step=(z_range[1]-z_range[0])/(z_range[2]-1))
acq.acquire(events)
###Output
_____no_output_____
###Markdown
napari update displayis called whenever the thread worker checking the queue yields an image adds images into xyz stack and updates data
###Code
def display_napari(pos_img):
""" Unpacks z position and reshapes image from pos_img. Writes image into correct
slice of data, and updates napari display.
Called by worker thread yielding elements from queue.
Needs to be in code before worker thread connecting to it.
Inputs: array pos_img: queue element containing z position and raveled image data.
Global variables: np.array data: contains image stack
img_queue: needed only to send task_done() signal.
"""
global data
global img_queue
if pos_img is None:
return
# read image and z position
image = np.reshape(pos_img[2:],(clip[0], clip[1]))
z_pos = pos_img[1]
color = pos_img[0]
# write image into correct slice of data and update display
data[z_pos] = np.squeeze(image)
layer = viewer.layers[color]
layer.data = data
#print("updating ", z_pos, color)
img_queue.task_done()
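# note (comment added): queue elements are [channel, z_index, raveled image], as produced by place_data()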
###Output
_____no_output_____
###Markdown
worker threads appending data to queue and reading from queue
###Code
@thread_worker
def append_img(img_queue):
""" Worker thread that adds images to a list.
Calls either micro-manager data acquisition or functions for simulating data.
Inputs: img_queue """
# start microscope data acquisition
if not simulate:
if z_stack_external:
while acq_running:
acquire_data(z_range)
time.sleep(sleep_time)
else:
while acq_running:
acquire_multid(z_range)
time.sleep(sleep_time)
# run with simulated data
else:
ii = 0
while acq_running:
simulate_data(ii, z_range)
ii = ii + 1
#print("appending to queue", ii)
time.sleep(sleep_time)
@thread_worker(connect={'yielded': display_napari})
def yield_img(img_queue):
""" Worker thread that checks whether there are elements in the
queue, reads them out.
Connected to display_napari function to update display """
global acq_running
while acq_running:
time.sleep(sleep_time)
# get elements from queue while there is more than one element
# playing it safe: I'm always leaving one element in the queue
while img_queue.qsize() > 1:
#print("reading from queue ", img_queue.qsize())
yield img_queue.get(block = False)
# read out last remaining elements after end of acquisition
while img_queue.qsize() > 0:
yield img_queue.get(block = False)
print("acquisition done")
###Output
_____no_output_____
###Markdown
define functions to start and stop acquisition, connected to GUI buttons using magicgui. `start_acq` restarts the workers, sets the `acq_running` flag and resets `z_range[3]`, i.e. the z position counter. `stop_acq` sets the `acq_running` flag to `False`, which will stop the worker threads
###Code
@magicgui(call_button = "Start")
def start_acq():
""" Called when Start button in pressed. Starts workers and resets global variables"""
print("starting threads...")
global acq_running
global z_range
if not(acq_running):
z_range[3] = 0
acq_running = True
# comment in when benchmarking
#yappi.start()
worker1 = append_img(img_queue)
worker2 = yield_img(img_queue)
worker1.start()
#worker2.start() # doesn't need to be started bc yield is connected
else:
print("acquisition already running!")
@magicgui(call_button = "Stop")
def stop_acq():
print("stopping threads")
# set global acq_running to False to stop other workers
global acq_running
global core
acq_running = False
if not(simulate) and not(z_stack_external):
print('z stage stopping: ', core.get_position())
core.stop("Z") # this doesnt work, just continues moving. eventually micromanager memory overflows
print('z stage stopped: ', core.get_position())
core.set_position(z_range[0]) # this also doesn't work
core.wait_for_device("Z")
#time.sleep(5)
print('z stage zeroed: ', core.get_position())
# comment in when benchmarking
# yappi.stop()
###Output
_____no_output_____
###Markdown
"Main" function: start napari and worker threads(re-)opens napary viewer initializes view with random data sets scale, contrast etc and rolls view. add GUI buttons for start stop there's a glitch when acquisition is stopped and restarted too quickly
###Code
# check if viewer is already open
# if yes: close and reopen
try:
if viewer:
viewer.close()
except:
print("viewer already closed or never opened")
viewer = napari.Viewer(ndisplay=2)
# initialize napari viewer with stack view and random data, reslice view
scale = [(z_range[1]-z_range[0])/z_range[2]*z_scale, size_um[1], size_um[0]]
layers = [viewer.add_image(data,
name = layer_names[c],
colormap = cmap[c],
interpolation = 'nearest',
blending = 'additive',
rendering = 'attenuated_mip',
scale = scale,
contrast_limits = clim)
for c in range(channels[0])]
viewer.dims._roll()
# set sliders to the middle of the stack for all three dimensions.
# doesn't work anymore after fixing scaling
# would have to be done for both layers
#for dd, dim in enumerate(layers[0].data.shape):
# viewer.dims.set_point(dd, dim*scale[2-dd]//2)
# add start stop buttons to napari gui
viewer.window.add_dock_widget(start_acq)
viewer.window.add_dock_widget(stop_acq)
###Output
viewer already closed or never opened
###Markdown
Get output from yappi; only needs to be run when benchmarking the code
###Code
print('z stage zeroed: ', core.get_position())
#only needs to be executed when yappi is used
threads = yappi.get_thread_stats()
for thread in threads:
print(
"Function stats for (%s) (%d)" % (thread.name, thread.id)
) # it is the Thread.__class__.__name__
yappi.get_func_stats(ctx_id=thread.id).print_all()
###Output
_____no_output_____ |
rl_unplugged/dmlab_r2d2.ipynb | ###Markdown
Copyright 2021 DeepMind Technologies Limited. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0). Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. RL Unplugged: Offline R2D2 - DeepMind Lab A Colab example of an Acme R2D2 agent on DeepMind Lab data. Installation External dependencies
###Code
!apt-get install libsdl2-dev
!apt-get install libosmesa6-dev
!apt-get install libffi-dev
!apt-get install gettext
!apt-get install python3-numpy-dev python3-dev
###Output
_____no_output_____
###Markdown
Bazel
###Code
BAZEL_VERSION = '3.6.0'
!wget https://github.com/bazelbuild/bazel/releases/download/{BAZEL_VERSION}/bazel-{BAZEL_VERSION}-installer-linux-x86_64.sh
!chmod +x bazel-{BAZEL_VERSION}-installer-linux-x86_64.sh
!./bazel-{BAZEL_VERSION}-installer-linux-x86_64.sh
!bazel --version
###Output
_____no_output_____
###Markdown
DeepMind Lab
###Code
!git clone https://github.com/deepmind/lab.git
%%writefile lab/bazel/python.BUILD
# Description:
# Build rule for Python and Numpy.
# This rule works for Debian and Ubuntu. Other platforms might keep the
# headers in different places, cf. 'How to build DeepMind Lab' in build.md.
cc_library(
name = "python",
hdrs = select(
{
"@bazel_tools//tools/python:PY3": glob([
"usr/include/python3.6m/*.h",
"usr/local/lib/python3.6/dist-packages/numpy/core/include/numpy/*.h",
]),
},
no_match_error = "Internal error, Python version should be one of PY2 or PY3",
),
includes = select(
{
"@bazel_tools//tools/python:PY3": [
"usr/include/python3.6m",
"usr/local/lib/python3.6/dist-packages/numpy/core/include",
],
},
no_match_error = "Internal error, Python version should be one of PY2 or PY3",
),
visibility = ["//visibility:public"],
)
alias(
name = "python_headers",
actual = ":python",
visibility = ["//visibility:public"],
)
!cd lab && bazel build -c opt --python_version=PY3 //python/pip_package:build_pip_package
!cd lab && ./bazel-bin/python/pip_package/build_pip_package /tmp/dmlab_pkg
!pip install /tmp/dmlab_pkg/deepmind_lab-1.0-py3-none-any.whl --force-reinstall
###Output
_____no_output_____
###Markdown
Python dependencies
###Code
!pip install dm_env
!pip install dm-acme[reverb]
!pip install dm-acme[tf]
!pip install dm-sonnet
# Upgrade to recent commit for latest R2D2 learner.
!pip install --upgrade git+https://github.com/deepmind/acme.git@3dfda9d392312d948906e6c567c7f56d8c911de5
###Output
_____no_output_____
###Markdown
Imports and Utils
###Code
# @title Imports
import copy
import functools
from acme import environment_loop
from acme import specs
from acme.adders import reverb as acme_reverb
from acme.agents.tf import actors
from acme.agents.tf.r2d2 import learning as r2d2
from acme.tf import utils as tf_utils
from acme.tf import networks
from acme.utils import loggers
from acme.wrappers import observation_action_reward
import tree
import deepmind_lab
import dm_env
import numpy as np
import reverb
import sonnet as snt
import tensorflow as tf
import trfl
# @title Environment
_ACTION_MAP = {
0: (0, 0, 0, 1, 0, 0, 0),
1: (0, 0, 0, -1, 0, 0, 0),
2: (0, 0, -1, 0, 0, 0, 0),
3: (0, 0, 1, 0, 0, 0, 0),
4: (-10, 0, 0, 0, 0, 0, 0),
5: (10, 0, 0, 0, 0, 0, 0),
6: (-60, 0, 0, 0, 0, 0, 0),
7: (60, 0, 0, 0, 0, 0, 0),
8: (0, 10, 0, 0, 0, 0, 0),
9: (0, -10, 0, 0, 0, 0, 0),
10: (-10, 0, 0, 1, 0, 0, 0),
11: (10, 0, 0, 1, 0, 0, 0),
12: (-60, 0, 0, 1, 0, 0, 0),
13: (60, 0, 0, 1, 0, 0, 0),
14: (0, 0, 0, 0, 1, 0, 0),
}
class DeepMindLabEnvironment(dm_env.Environment):
"""DeepMind Lab environment."""
def __init__(self, level_name: str, action_repeats: int = 4):
"""Construct environment.
Args:
level_name: DeepMind lab level name (e.g. 'rooms_watermaze').
action_repeats: Number of times the same action is repeated on every
step().
"""
config = dict(fps='30',
height='72',
width='96',
maxAltCameraHeight='1',
maxAltCameraWidth='1',
hasAltCameras='false')
# seekavoid_arena_01 is not part of dmlab30.
if level_name != 'seekavoid_arena_01':
level_name = 'contributed/dmlab30/{}'.format(level_name)
self._lab = deepmind_lab.Lab(level_name, ['RGB_INTERLEAVED'], config)
self._action_repeats = action_repeats
self._reward = 0
def _observation(self):
last_action = getattr(self, '_action', 0)
last_reward = getattr(self, '_reward', 0)
self._last_observation = observation_action_reward.OAR(
observation=self._lab.observations()['RGB_INTERLEAVED'],
action=np.array(last_action, dtype=np.int64),
reward=np.array(last_reward, dtype=np.float32))
return self._last_observation
def reset(self):
self._lab.reset()
return dm_env.restart(self._observation())
def step(self, action):
if not self._lab.is_running():
return dm_env.restart(self.reset())
self._action = action.item()
if self._action not in _ACTION_MAP:
raise ValueError('Action not available')
lab_action = np.array(_ACTION_MAP[self._action], dtype=np.intc)
self._reward = self._lab.step(lab_action, num_steps=self._action_repeats)
if self._lab.is_running():
return dm_env.transition(self._reward, self._observation())
return dm_env.termination(self._reward, self._last_observation)
def observation_spec(self):
return observation_action_reward.OAR(
observation=dm_env.specs.Array(shape=(72, 96, 3), dtype=np.uint8),
action=dm_env.specs.Array(shape=(), dtype=np.int64),
reward=dm_env.specs.Array(shape=(), dtype=np.float32))
def action_spec(self):
return dm_env.specs.DiscreteArray(num_values=15, dtype=np.int64)
# @title Dataset
def _decode_images(pngs):
"""Decode tensor of PNGs."""
decode_rgb_png = functools.partial(tf.io.decode_png, channels=3)
images = tf.map_fn(decode_rgb_png, pngs, dtype=tf.uint8,
parallel_iterations=10)
# [N, 72, 96, 3]
images.set_shape((pngs.shape[0], 72, 96, 3))
return images
def _tf_example_to_step_ds(tf_example: tf.train.Example,
episode_length: int) -> reverb.ReplaySample:
"""Create a Reverb replay sample from a TF example."""
# Parse tf.Example.
def sequence_feature(shape, dtype=tf.float32):
return tf.io.FixedLenFeature(shape=[episode_length] + shape, dtype=dtype)
feature_description = {
'episode_id': tf.io.FixedLenFeature([], tf.int64),
'start_idx': tf.io.FixedLenFeature([], tf.int64),
'episode_return': tf.io.FixedLenFeature([], tf.float32),
'observations_pixels': sequence_feature([], tf.string),
'observations_reward': sequence_feature([]),
# actions are one-hot arrays.
'observations_action': sequence_feature([15]),
'actions': sequence_feature([], tf.int64),
'rewards': sequence_feature([]),
'discounted_rewards': sequence_feature([]),
'discounts': sequence_feature([]),
}
data = tf.io.parse_single_example(tf_example, feature_description)
pixels = _decode_images(data['observations_pixels'])
observation = observation_action_reward.OAR(
observation=pixels,
action=tf.argmax(data['observations_action'],
axis=1, output_type=tf.int64),
reward=data['observations_reward'])
data = acme_reverb.Step(
observation=observation,
action=data['actions'],
reward=data['rewards'],
discount=data['discounts'],
start_of_episode=tf.zeros((episode_length,), tf.bool),
extras={})
# Keys are all zero and probabilities are all one.
info = reverb.SampleInfo(key=tf.zeros((episode_length,), tf.int64),
probability=tf.ones((episode_length,), tf.float32),
table_size=tf.zeros((episode_length,), tf.int64),
priority=tf.ones((episode_length,), tf.float32))
sample = reverb.ReplaySample(info=info, data=data)
return tf.data.Dataset.from_tensor_slices(sample)
def subsequences(step_ds: tf.data.Dataset,
length: int, shift: int = 1
) -> tf.data.Dataset:
"""Dataset of subsequences from a dataset of episode steps."""
window_ds = step_ds.window(length, shift=shift, stride=1)
return window_ds.interleave(_nest_ds).batch(length, drop_remainder=True)
def _nest_ds(nested_ds: tf.data.Dataset) -> tf.data.Dataset:
"""Produces a dataset of nests from a nest of datasets of the same size."""
flattened_ds = tuple(tree.flatten(nested_ds))
zipped_ds = tf.data.Dataset.zip(flattened_ds)
return zipped_ds.map(lambda *x: tree.unflatten_as(nested_ds, x))
def make_dataset(path: str,
episode_length: int,
sequence_length: int,
sequence_shift: int,
num_shards: int = 500) -> tf.data.Dataset:
"""Create dataset of DeepMind Lab sequences."""
filenames = [f'{path}/tfrecord-{i:05d}-of-{num_shards:05d}'
for i in range(num_shards)]
file_ds = tf.data.Dataset.from_tensor_slices(filenames)
file_ds = file_ds.repeat().shuffle(num_shards)
tfrecord_dataset = functools.partial(tf.data.TFRecordDataset,
compression_type='GZIP')
# Dataset of tf.Examples containing full episodes.
example_ds = file_ds.interleave(tfrecord_dataset)
# Dataset of episodes, each represented as a dataset of steps.
_tf_example_to_step_ds_with_length = functools.partial(
_tf_example_to_step_ds, episode_length=episode_length)
episode_ds = example_ds.map(_tf_example_to_step_ds_with_length,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
# Dataset of sequences.
training_sequences = functools.partial(subsequences, length=sequence_length,
shift=sequence_shift)
return episode_ds.interleave(training_sequences)
###Output
_____no_output_____
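###Markdown
Optional smoke test for the environment wrapper above. This is only a sketch: it assumes the `deepmind_lab` wheel built earlier imports correctly and that the level assets are available in this runtime.
###Code
# Reset the wrapped environment and take a single step with action index 0.
smoke_env = DeepMindLabEnvironment('seekavoid_arena_01', action_repeats=2)
first_step = smoke_env.reset()
print(first_step.observation.observation.shape)  # expected: (72, 96, 3)
next_step = smoke_env.step(np.array(0))
print(next_step.step_type, next_step.reward)
###Output
_____no_output_____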
###Markdown
Experiment
###Code
# task | episode length | run
# ----------------------------------------------------------------------------
# seekavoid_arena_01 | 301 | training_{0..2}
# seekavoid_arena_01 | 301 | snapshot_{0..1}_eps_0.0
# seekavoid_arena_01 | 301 | snapshot_{0..1}_eps_0.01
# seekavoid_arena_01 | 301 | snapshot_{0..1}_eps_0.1
# seekavoid_arena_01 | 301 | snapshot_{0..1}_eps_0.25
# explore_object_rewards_few | 1351 | training_{0..2}
# explore_object_rewards_many | 1801 | training_{0..2}
# rooms_select_nonmatching_object | 181 | training_{0..2}
# rooms_watermaze | 1801 | training_{0..2}
TASK = 'seekavoid_arena_01'
RUN = 'training_0'
EPISODE_LENGTH = 301
BATCH_SIZE = 1
DATASET_PATH = f'gs://rl_unplugged/dmlab/{TASK}/{RUN}'
environment = DeepMindLabEnvironment(TASK, action_repeats=2)
dataset = make_dataset(DATASET_PATH, num_shards=500,
episode_length=EPISODE_LENGTH,
sequence_length=120,
sequence_shift=40)
dataset = dataset.padded_batch(BATCH_SIZE, drop_remainder=True)
###Output
_____no_output_____
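###Markdown
Before training, it can help to sanity-check a single batch from the input pipeline. This optional cell is only a sketch; it assumes the public `gs://rl_unplugged` bucket is reachable from this runtime.
###Code
# Pull one padded batch and print the nested tensor shapes.
batch = next(iter(dataset))
print(batch.data.observation.observation.shape)  # (BATCH_SIZE, 120, 72, 96, 3)
print(batch.data.action.shape)                   # (BATCH_SIZE, 120)
print(batch.data.reward.shape)                   # (BATCH_SIZE, 120)
###Output
_____no_output_____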
###Markdown
Learning
###Code
# Create network.
def process_observations(x):
return x._replace(observation=tf.image.convert_image_dtype(x.observation, tf.float32))
environment_spec = specs.make_environment_spec(environment)
num_actions = environment_spec.actions.maximum + 1
network = snt.DeepRNN([
process_observations,
networks.R2D2AtariNetwork(num_actions=num_actions)
])
tf_utils.create_variables(network, [environment_spec.observations])
# Create a logger.
logger = loggers.TerminalLogger(label='learner', time_delta=1.)
# Create the R2D2 learner.
learner = r2d2.R2D2Learner(
environment_spec=environment_spec,
network=network,
target_network=copy.deepcopy(network),
discount=0.99,
learning_rate=1e-4,
importance_sampling_exponent=0.2,
target_update_period=100,
burn_in_length=0,
sequence_length=120,
store_lstm_state=False,
dataset=dataset,
logger=logger)
for _ in range(5):
learner.step()
###Output
_____no_output_____
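###Markdown
As a quick optional sanity check, we can count the trainable parameters of the R2D2 network built above. The exact figure depends on the `R2D2AtariNetwork` implementation shipped with the pinned Acme commit.
###Code
# Rough count of trainable parameters (create_variables() was already called).
n_params = sum(int(np.prod(v.shape)) for v in network.trainable_variables)
print(f'trainable parameters: {n_params:,}')
###Output
_____no_output_____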
###Markdown
Evaluation
###Code
# Create a logger.
logger = loggers.TerminalLogger(label='evaluator', time_delta=1.)
# Create evaluation loop.
eval_network = snt.DeepRNN([
network,
lambda q: trfl.epsilon_greedy(q, epsilon=0.4**8).sample(),
])
eval_loop = environment_loop.EnvironmentLoop(
environment=environment,
actor=actors.RecurrentActor(policy_network=eval_network),
logger=logger)
eval_loop.run(2)
###Output
_____no_output_____
###Markdown
Copyright 2021 DeepMind Technologies Limited. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0). Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. RL Unplugged: Offline R2D2 - DeepMind Lab A Colab example of an Acme R2D2 agent on DeepMind Lab data. Installation External dependencies
###Code
!apt-get install libsdl2-dev
!apt-get install libosmesa6-dev
!apt-get install libffi-dev
!apt-get install gettext
!apt-get install python3-numpy-dev python3-dev
###Output
_____no_output_____
###Markdown
Bazel
###Code
BAZEL_VERSION = '3.6.0'
!wget https://github.com/bazelbuild/bazel/releases/download/{BAZEL_VERSION}/bazel-{BAZEL_VERSION}-installer-linux-x86_64.sh
!chmod +x bazel-{BAZEL_VERSION}-installer-linux-x86_64.sh
!./bazel-{BAZEL_VERSION}-installer-linux-x86_64.sh
!bazel --version
###Output
_____no_output_____
###Markdown
DeepMind Lab
###Code
!git clone https://github.com/deepmind/lab.git
%%writefile lab/bazel/python.BUILD
# Description:
# Build rule for Python and Numpy.
# This rule works for Debian and Ubuntu. Other platforms might keep the
# headers in different places, cf. 'How to build DeepMind Lab' in build.md.
cc_library(
name = "python",
hdrs = select(
{
"@bazel_tools//tools/python:PY3": glob([
"usr/include/python3.6m/*.h",
"usr/local/lib/python3.6/dist-packages/numpy/core/include/numpy/*.h",
]),
},
no_match_error = "Internal error, Python version should be one of PY2 or PY3",
),
includes = select(
{
"@bazel_tools//tools/python:PY3": [
"usr/include/python3.6m",
"usr/local/lib/python3.6/dist-packages/numpy/core/include",
],
},
no_match_error = "Internal error, Python version should be one of PY2 or PY3",
),
visibility = ["//visibility:public"],
)
alias(
name = "python_headers",
actual = ":python",
visibility = ["//visibility:public"],
)
!cd lab && bazel build -c opt --python_version=PY3 //python/pip_package:build_pip_package
!cd lab && ./bazel-bin/python/pip_package/build_pip_package /tmp/dmlab_pkg
!pip install /tmp/dmlab_pkg/deepmind_lab-1.0-py3-none-any.whl --force-reinstall
###Output
_____no_output_____
###Markdown
Python dependencies
###Code
!pip install dm_env
!pip install dm-acme[reverb]
!pip install dm-acme[tf]
!pip install dm-sonnet
# Upgrade to recent commit for latest R2D2 learner.
!pip install --upgrade git+https://github.com/deepmind/acme.git@3dfda9d392312d948906e6c567c7f56d8c911de5
###Output
_____no_output_____
###Markdown
Imports and Utils
###Code
# @title Imports
import copy
import functools
from acme import environment_loop
from acme import specs
from acme.adders import reverb as acme_reverb
from acme.agents.tf import actors
from acme.agents.tf.r2d2 import learning as r2d2
from acme.tf import utils as tf_utils
from acme.tf import networks
from acme.utils import loggers
from acme.wrappers import observation_action_reward
import tree
import deepmind_lab
import dm_env
import numpy as np
import reverb
import sonnet as snt
import tensorflow as tf
import trfl
# @title Environment
_ACTION_MAP = {
0: (0, 0, 0, 1, 0, 0, 0),
1: (0, 0, 0, -1, 0, 0, 0),
2: (0, 0, -1, 0, 0, 0, 0),
3: (0, 0, 1, 0, 0, 0, 0),
4: (-10, 0, 0, 0, 0, 0, 0),
5: (10, 0, 0, 0, 0, 0, 0),
6: (-60, 0, 0, 0, 0, 0, 0),
7: (60, 0, 0, 0, 0, 0, 0),
8: (0, 10, 0, 0, 0, 0, 0),
9: (0, -10, 0, 0, 0, 0, 0),
10: (-10, 0, 0, 1, 0, 0, 0),
11: (10, 0, 0, 1, 0, 0, 0),
12: (-60, 0, 0, 1, 0, 0, 0),
13: (60, 0, 0, 1, 0, 0, 0),
14: (0, 0, 0, 0, 1, 0, 0),
}
class DeepMindLabEnvironment(dm_env.Environment):
"""DeepMind Lab environment."""
def __init__(self, level_name: str, action_repeats: int = 4):
"""Construct environment.
Args:
level_name: DeepMind lab level name (e.g. 'rooms_watermaze').
action_repeats: Number of times the same action is repeated on every
step().
"""
config = dict(fps='30',
height='72',
width='96',
maxAltCameraHeight='1',
maxAltCameraWidth='1',
hasAltCameras='false')
# seekavoid_arena_01 is not part of dmlab30.
if level_name != 'seekavoid_arena_01':
level_name = 'contributed/dmlab30/{}'.format(level_name)
self._lab = deepmind_lab.Lab(level_name, ['RGB_INTERLEAVED'], config)
self._action_repeats = action_repeats
self._reward = 0
def _observation(self):
last_action = getattr(self, '_action', 0)
last_reward = getattr(self, '_reward', 0)
self._last_observation = observation_action_reward.OAR(
observation=self._lab.observations()['RGB_INTERLEAVED'],
action=np.array(last_action, dtype=np.int64),
reward=np.array(last_reward, dtype=np.float32))
return self._last_observation
def reset(self):
self._lab.reset()
return dm_env.restart(self._observation())
def step(self, action):
if not self._lab.is_running():
return dm_env.restart(self.reset())
self._action = action.item()
if self._action not in _ACTION_MAP:
raise ValueError('Action not available')
lab_action = np.array(_ACTION_MAP[self._action], dtype=np.intc)
self._reward = self._lab.step(lab_action, num_steps=self._action_repeats)
if self._lab.is_running():
return dm_env.transition(self._reward, self._observation())
return dm_env.termination(self._reward, self._last_observation)
def observation_spec(self):
return observation_action_reward.OAR(
observation=dm_env.specs.Array(shape=(72, 96, 3), dtype=np.uint8),
action=dm_env.specs.Array(shape=(), dtype=np.int64),
reward=dm_env.specs.Array(shape=(), dtype=np.float32))
def action_spec(self):
return dm_env.specs.DiscreteArray(num_values=15, dtype=np.int64)
# @title Dataset
def _decode_images(pngs):
"""Decode tensor of PNGs."""
decode_rgb_png = functools.partial(tf.io.decode_png, channels=3)
images = tf.map_fn(decode_rgb_png, pngs, dtype=tf.uint8,
parallel_iterations=10)
# [N, 72, 96, 3]
images.set_shape((pngs.shape[0], 72, 96, 3))
return images
def _tf_example_to_step_ds(tf_example: tf.train.Example,
episode_length: int) -> reverb.ReplaySample:
"""Create a Reverb replay sample from a TF example."""
# Parse tf.Example.
def sequence_feature(shape, dtype=tf.float32):
return tf.io.FixedLenFeature(shape=[episode_length] + shape, dtype=dtype)
feature_description = {
'episode_id': tf.io.FixedLenFeature([], tf.int64),
'start_idx': tf.io.FixedLenFeature([], tf.int64),
'episode_return': tf.io.FixedLenFeature([], tf.float32),
'observations_pixels': sequence_feature([], tf.string),
'observations_reward': sequence_feature([]),
# actions are one-hot arrays.
'observations_action': sequence_feature([15]),
'actions': sequence_feature([], tf.int64),
'rewards': sequence_feature([]),
'discounted_rewards': sequence_feature([]),
'discounts': sequence_feature([]),
}
data = tf.io.parse_single_example(tf_example, feature_description)
pixels = _decode_images(data['observations_pixels'])
observation = observation_action_reward.OAR(
observation=pixels,
action=tf.argmax(data['observations_action'],
axis=1, output_type=tf.int64),
reward=data['observations_reward'])
data = acme_reverb.Step(
observation=observation,
action=data['actions'],
reward=data['rewards'],
discount=data['discounts'],
start_of_episode=tf.zeros((episode_length,), tf.bool),
extras={})
# Keys are all zero and probabilities are all one.
info = reverb.SampleInfo(key=tf.zeros((episode_length,), tf.int64),
probability=tf.ones((episode_length,), tf.float32),
table_size=tf.zeros((episode_length,), tf.int64),
priority=tf.ones((episode_length,), tf.float32))
sample = reverb.ReplaySample(info=info, data=data)
return tf.data.Dataset.from_tensor_slices(sample)
def subsequences(step_ds: tf.data.Dataset,
length: int, shift: int = 1
) -> tf.data.Dataset:
"""Dataset of subsequences from a dataset of episode steps."""
window_ds = step_ds.window(length, shift=shift, stride=1)
return window_ds.interleave(_nest_ds).batch(length, drop_remainder=True)
def _nest_ds(nested_ds: tf.data.Dataset) -> tf.data.Dataset:
"""Produces a dataset of nests from a nest of datasets of the same size."""
flattened_ds = tuple(tree.flatten(nested_ds))
zipped_ds = tf.data.Dataset.zip(flattened_ds)
return zipped_ds.map(lambda *x: tree.unflatten_as(nested_ds, x))
def make_dataset(path: str,
episode_length: int,
sequence_length: int,
sequence_shift: int,
num_shards: int = 500) -> tf.data.Dataset:
"""Create dataset of DeepMind Lab sequences."""
filenames = [f'{path}/tfrecord-{i:05d}-of-{num_shards:05d}'
for i in range(num_shards)]
file_ds = tf.data.Dataset.from_tensor_slices(filenames)
file_ds = file_ds.repeat().shuffle(num_shards)
tfrecord_dataset = functools.partial(tf.data.TFRecordDataset,
compression_type='GZIP')
# Dataset of tf.Examples containing full episodes.
example_ds = file_ds.interleave(tfrecord_dataset)
# Dataset of episodes, each represented as a dataset of steps.
_tf_example_to_step_ds_with_length = functools.partial(
_tf_example_to_step_ds, episode_length=episode_length)
episode_ds = example_ds.map(_tf_example_to_step_ds_with_length,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
# Dataset of sequences.
training_sequences = functools.partial(subsequences, length=sequence_length,
shift=sequence_shift)
return episode_ds.interleave(training_sequences)
###Output
_____no_output_____
###Markdown
Experiment
###Code
# task | episode length | run
# ----------------------------------------------------------------------------
# seekavoid_arena_01 | 301 | training_{0..2}
# seekavoid_arena_01 | 301 | snapshot_{0..1}_eps_0.0
# seekavoid_arena_01 | 301 | snapshot_{0..1}_eps_0.01
# seekavoid_arena_01 | 301 | snapshot_{0..1}_eps_0.1
# seekavoid_arena_01 | 301 | snapshot_{0..1}_eps_0.25
# explore_object_rewards_few | 1351 | training_{0..2}
# explore_object_rewards_many | 1801 | training_{0..2}
# rooms_select_nonmatching_object | 181 | training_{0..2}
# rooms_watermaze | 1801 | training_{0..2}
TASK = 'seekavoid_arena_01'
RUN = 'training_0'
EPISODE_LENGTH = 301
BATCH_SIZE = 1
DATASET_PATH = f'gs://rl_unplugged/dmlab/{TASK}/{RUN}'
environment = DeepMindLabEnvironment(TASK, action_repeats=2)
dataset = make_dataset(DATASET_PATH, num_shards=500,
episode_length=EPISODE_LENGTH,
sequence_length=120,
sequence_shift=40)
dataset = dataset.padded_batch(BATCH_SIZE, drop_remainder=True)
###Output
_____no_output_____
###Markdown
Learning
###Code
# Create network.
def process_observations(x):
return x._replace(observation=tf.image.convert_image_dtype(x.observation, tf.float32))
environment_spec = specs.make_environment_spec(environment)
num_actions = environment_spec.actions.maximum + 1
network = snt.DeepRNN([
process_observations,
networks.R2D2AtariNetwork(num_actions=num_actions)
])
tf_utils.create_variables(network, [environment_spec.observations])
# Create a logger.
logger = loggers.TerminalLogger(label='learner', time_delta=1.)
# Create the R2D2 learner.
learner = r2d2.R2D2Learner(
environment_spec=environment_spec,
network=network,
target_network=copy.deepcopy(network),
discount=0.99,
learning_rate=1e-4,
importance_sampling_exponent=0.2,
target_update_period=100,
burn_in_length=0,
sequence_length=120,
store_lstm_state=False,
dataset=dataset,
logger=logger)
for _ in range(5):
learner.step()
###Output
_____no_output_____
###Markdown
Evaluation
###Code
# Create a logger.
logger = loggers.TerminalLogger(label='evaluator', time_delta=1.)
# Create evaluation loop.
eval_network = snt.DeepRNN([
network,
lambda q: trfl.epsilon_greedy(q, epsilon=0.4**8).sample(),
])
eval_loop = environment_loop.EnvironmentLoop(
environment=environment,
actor=actors.DeprecatedRecurrentActor(policy_network=eval_network),
logger=logger)
eval_loop.run(2)
###Output
_____no_output_____ |
python3/notebooks/text-preprocessing-post/tokenize-with-whitespace.ipynb | ###Markdown
create functions
###Code
import re
def _tokenize(input_str):
# to split on: ' ', ':', '!', '%', '?', ',', '.', '/'
to_split = r"(?u)(?:\s|(\:)|(!)|(%)|(\?)|(,)|(\.)|(\/))"
tokenized_parts = [tok for tok in re.split(to_split, input_str) if tok]
return " ".join(tokenized_parts)
def _remove_duplicated_whitespace(input_str):
return re.sub(r'\s+', ' ', input_str)
def preprocess(input_str):
output_str = input_str
output_str = _tokenize(output_str)
output_str = _remove_duplicated_whitespace(output_str)
output_str = output_str.strip(" ")
return output_str
###Output
_____no_output_____
###Markdown
test
###Code
preprocess("foo! bar")
preprocess("foo!baz.")
preprocess("foo!bar,baz.")
###Output
_____no_output_____
###Markdown
We purposefully did not add `-` to the splitter characters so it isn't split on that
###Code
preprocess("foo-bar")
###Output
_____no_output_____ |
src/probing-tasks/reproduce/probing_tasks_all_badges.ipynb | ###Markdown
**Probing Tasks - How Does BERT Answer Questions?**In this notebook, we will carry out the following badges:**0.** implement `jiant`'s pipeline **1.** reproduce the probing tasks (with bert-base and bert-finetuned): * NEL, REL, COREF on the OntoNotes dataset * QUES on the TREC-10 dataset * SUP on the SQuAD dataset * SUP on the bAbI dataset * ~~SUP on the Hotpot dataset~~: *We skip this probing task because it requires training bert-large, which might take a very long time.* **2.** experiment with BERT base uncased trained on the Adversarial dataset **3.** experiment with the Roberta-base model on a task **0. Implement `jiant`'s pipeline**--- `jiant` has a new training pipeline to facilitate modern experimental workflows (see report). For further use, we implement a convenient method that runs the whole pipeline and trains the model on a probing task. **0.1 Use modified `jiant` library**We modified some code of the [original jiant library](https://github.com/nyu-mll/jiant) to add functionality that `jiant` does not support out of the box, e.g. probing each layer. For more details please see our pdf report. First, we will clone the modified jiant and install the libraries we need for this code.
###Code
!git clone https://github.com/SwiftPredator/How-Does-Bert-Answer-QA-DLP2021.git
# copy the modified jiant lib to the /content/
!mv "/content/How-Does-Bert-Answer-QA-DLP2021/src/probing-tasks/jiant" "/content/"
%cd jiant
!pip install -r requirements-no-torch.txt
!pip install --no-deps -e ./
!pip install gdown # lib to download file from googlde drive link
###Output
Cloning into 'How-Does-Bert-Answer-QA-DLP2021'...
remote: Enumerating objects: 1494, done.[K
remote: Counting objects: 100% (130/130), done.[K
remote: Compressing objects: 100% (94/94), done.[K
remote: Total 1494 (delta 66), reused 78 (delta 33), pack-reused 1364[K
Receiving objects: 100% (1494/1494), 164.54 MiB | 19.26 MiB/s, done.
Resolving deltas: 100% (731/731), done.
Checking out files: 100% (467/467), done.
/content/jiant
Collecting attrs==19.3.0
Downloading attrs-19.3.0-py2.py3-none-any.whl (39 kB)
Requirement already satisfied: bs4==0.0.1 in /usr/local/lib/python3.7/dist-packages (from -r requirements-no-torch.txt (line 2)) (0.0.1)
Collecting jsonnet==0.15.0
Downloading jsonnet-0.15.0.tar.gz (255 kB)
[K |████████████████████████████████| 255 kB 5.3 MB/s
[?25hCollecting lxml==4.6.3
Downloading lxml-4.6.3-cp37-cp37m-manylinux2014_x86_64.whl (6.3 MB)
[K |████████████████████████████████| 6.3 MB 8.3 MB/s
[?25hCollecting datasets==1.1.2
Downloading datasets-1.1.2-py3-none-any.whl (147 kB)
[K |████████████████████████████████| 147 kB 51.3 MB/s
[?25hCollecting nltk>=3.5
Downloading nltk-3.6.2-py3-none-any.whl (1.5 MB)
[K |████████████████████████████████| 1.5 MB 50.1 MB/s
[?25hCollecting numexpr==2.7.1
Downloading numexpr-2.7.1-cp37-cp37m-manylinux1_x86_64.whl (162 kB)
[K |████████████████████████████████| 162 kB 48.0 MB/s
[?25hCollecting numpy==1.18.4
Downloading numpy-1.18.4-cp37-cp37m-manylinux1_x86_64.whl (20.2 MB)
[K |████████████████████████████████| 20.2 MB 1.9 MB/s
[?25hCollecting pandas==1.0.3
Downloading pandas-1.0.3-cp37-cp37m-manylinux1_x86_64.whl (10.0 MB)
[K |████████████████████████████████| 10.0 MB 47.2 MB/s
[?25hCollecting python-Levenshtein==0.12.0
Downloading python-Levenshtein-0.12.0.tar.gz (48 kB)
[K |████████████████████████████████| 48 kB 4.3 MB/s
[?25hCollecting sacremoses==0.0.43
Downloading sacremoses-0.0.43.tar.gz (883 kB)
[K |████████████████████████████████| 883 kB 35.6 MB/s
[?25hCollecting seqeval==0.0.12
Downloading seqeval-0.0.12.tar.gz (21 kB)
Requirement already satisfied: scikit-learn==0.22.2.post1 in /usr/local/lib/python3.7/dist-packages (from -r requirements-no-torch.txt (line 13)) (0.22.2.post1)
Requirement already satisfied: scipy==1.4.1 in /usr/local/lib/python3.7/dist-packages (from -r requirements-no-torch.txt (line 14)) (1.4.1)
Collecting sentencepiece==0.1.91
Downloading sentencepiece-0.1.91-cp37-cp37m-manylinux1_x86_64.whl (1.1 MB)
[K |████████████████████████████████| 1.1 MB 49.8 MB/s
[?25hCollecting tokenizers==0.10.1
Downloading tokenizers-0.10.1-cp37-cp37m-manylinux2010_x86_64.whl (3.2 MB)
[K |████████████████████████████████| 3.2 MB 48.0 MB/s
[?25hCollecting tqdm==4.46.0
Downloading tqdm-4.46.0-py2.py3-none-any.whl (63 kB)
[K |████████████████████████████████| 63 kB 2.2 MB/s
[?25hCollecting transformers==4.5.0
Downloading transformers-4.5.0-py3-none-any.whl (2.1 MB)
[K |████████████████████████████████| 2.1 MB 63.9 MB/s
[?25hRequirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.7/dist-packages (from bs4==0.0.1->-r requirements-no-torch.txt (line 2)) (4.6.3)
Requirement already satisfied: pyarrow>=0.17.1 in /usr/local/lib/python3.7/dist-packages (from datasets==1.1.2->-r requirements-no-torch.txt (line 5)) (3.0.0)
Requirement already satisfied: requests>=2.19.0 in /usr/local/lib/python3.7/dist-packages (from datasets==1.1.2->-r requirements-no-torch.txt (line 5)) (2.23.0)
Requirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from datasets==1.1.2->-r requirements-no-torch.txt (line 5)) (3.0.12)
Requirement already satisfied: multiprocess in /usr/local/lib/python3.7/dist-packages (from datasets==1.1.2->-r requirements-no-torch.txt (line 5)) (0.70.12.2)
Collecting xxhash
Downloading xxhash-2.0.2-cp37-cp37m-manylinux2010_x86_64.whl (243 kB)
[K |████████████████████████████████| 243 kB 60.0 MB/s
[?25hRequirement already satisfied: dill in /usr/local/lib/python3.7/dist-packages (from datasets==1.1.2->-r requirements-no-torch.txt (line 5)) (0.3.4)
Requirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.7/dist-packages (from pandas==1.0.3->-r requirements-no-torch.txt (line 9)) (2.8.1)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas==1.0.3->-r requirements-no-torch.txt (line 9)) (2018.9)
Requirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from python-Levenshtein==0.12.0->-r requirements-no-torch.txt (line 10)) (57.2.0)
Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from sacremoses==0.0.43->-r requirements-no-torch.txt (line 11)) (2019.12.20)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from sacremoses==0.0.43->-r requirements-no-torch.txt (line 11)) (1.15.0)
Requirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from sacremoses==0.0.43->-r requirements-no-torch.txt (line 11)) (7.1.2)
Requirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from sacremoses==0.0.43->-r requirements-no-torch.txt (line 11)) (1.0.1)
Requirement already satisfied: Keras>=2.2.4 in /usr/local/lib/python3.7/dist-packages (from seqeval==0.0.12->-r requirements-no-torch.txt (line 12)) (2.4.3)
Requirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from transformers==4.5.0->-r requirements-no-torch.txt (line 18)) (21.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from transformers==4.5.0->-r requirements-no-torch.txt (line 18)) (4.6.1)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.7/dist-packages (from Keras>=2.2.4->seqeval==0.0.12->-r requirements-no-torch.txt (line 12)) (3.13)
Requirement already satisfied: h5py in /usr/local/lib/python3.7/dist-packages (from Keras>=2.2.4->seqeval==0.0.12->-r requirements-no-torch.txt (line 12)) (3.1.0)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->datasets==1.1.2->-r requirements-no-torch.txt (line 5)) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->datasets==1.1.2->-r requirements-no-torch.txt (line 5)) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->datasets==1.1.2->-r requirements-no-torch.txt (line 5)) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests>=2.19.0->datasets==1.1.2->-r requirements-no-torch.txt (line 5)) (2021.5.30)
Requirement already satisfied: cached-property in /usr/local/lib/python3.7/dist-packages (from h5py->Keras>=2.2.4->seqeval==0.0.12->-r requirements-no-torch.txt (line 12)) (1.5.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->transformers==4.5.0->-r requirements-no-torch.txt (line 18)) (3.5.0)
Requirement already satisfied: typing-extensions>=3.6.4 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->transformers==4.5.0->-r requirements-no-torch.txt (line 18)) (3.7.4.3)
Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging->transformers==4.5.0->-r requirements-no-torch.txt (line 18)) (2.4.7)
[33mWARNING: The candidate selected for download or install is a yanked version: 'python-levenshtein' candidate (version 0.12.0 at https://files.pythonhosted.org/packages/42/a9/d1785c85ebf9b7dfacd08938dd028209c34a0ea3b1bcdb895208bd40a67d/python-Levenshtein-0.12.0.tar.gz#sha256=033a11de5e3d19ea25c9302d11224e1a1898fe5abd23c61c7c360c25195e3eb1 (from https://pypi.org/simple/python-levenshtein/))
Reason for being yanked: Insecure, upgrade to 0.12.1[0m
Building wheels for collected packages: jsonnet, python-Levenshtein, sacremoses, seqeval
Building wheel for jsonnet (setup.py) ... [?25l[?25hdone
Created wheel for jsonnet: filename=jsonnet-0.15.0-cp37-cp37m-linux_x86_64.whl size=3320405 sha256=faf51e274bdd919ff60b4267f7d3b0bbf3e14329700d5389e75b710f7cc88578
Stored in directory: /root/.cache/pip/wheels/21/01/e4/6fabcb0c191f51e98452f2af6cb2086f0f1cec94a2c0ce9948
Building wheel for python-Levenshtein (setup.py) ... [?25l[?25hdone
Created wheel for python-Levenshtein: filename=python_Levenshtein-0.12.0-cp37-cp37m-linux_x86_64.whl size=145915 sha256=bfcd5722b2e783ac01e1ff982faf2da29c695b8800b5624cf3f61732556378dd
Stored in directory: /root/.cache/pip/wheels/f0/9b/13/49c281164c37be18343230d3cd0fca29efb23a493351db0009
Building wheel for sacremoses (setup.py) ... [?25l[?25hdone
Created wheel for sacremoses: filename=sacremoses-0.0.43-py3-none-any.whl size=893251 sha256=3d7321dffb5e02db1a893be4999eaad6d0f32f57b804ec8b375c7ce61a94dd05
Stored in directory: /root/.cache/pip/wheels/69/09/d1/bf058f7d6fa0ecba2ce7c66be3b8d012beb4bf61a6e0c101c0
Building wheel for seqeval (setup.py) ... [?25l[?25hdone
Created wheel for seqeval: filename=seqeval-0.0.12-py3-none-any.whl size=7434 sha256=b862ed9529c0bf8a62a26384b4e07ca4b1007f3409a7cc71ad75e640f04a1ed3
Stored in directory: /root/.cache/pip/wheels/dc/cc/62/a3b81f92d35a80e39eb9b2a9d8b31abac54c02b21b2d466edc
Successfully built jsonnet python-Levenshtein sacremoses seqeval
Installing collected packages: numpy, tqdm, xxhash, tokenizers, sacremoses, pandas, transformers, seqeval, sentencepiece, python-Levenshtein, numexpr, nltk, lxml, jsonnet, datasets, attrs
Attempting uninstall: numpy
Found existing installation: numpy 1.19.5
Uninstalling numpy-1.19.5:
Successfully uninstalled numpy-1.19.5
Attempting uninstall: tqdm
Found existing installation: tqdm 4.41.1
Uninstalling tqdm-4.41.1:
Successfully uninstalled tqdm-4.41.1
Attempting uninstall: pandas
Found existing installation: pandas 1.1.5
Uninstalling pandas-1.1.5:
Successfully uninstalled pandas-1.1.5
Attempting uninstall: numexpr
Found existing installation: numexpr 2.7.3
Uninstalling numexpr-2.7.3:
Successfully uninstalled numexpr-2.7.3
Attempting uninstall: nltk
Found existing installation: nltk 3.2.5
Uninstalling nltk-3.2.5:
Successfully uninstalled nltk-3.2.5
Attempting uninstall: lxml
Found existing installation: lxml 4.2.6
Uninstalling lxml-4.2.6:
Successfully uninstalled lxml-4.2.6
Attempting uninstall: attrs
Found existing installation: attrs 21.2.0
Uninstalling attrs-21.2.0:
Successfully uninstalled attrs-21.2.0
[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow 2.5.0 requires numpy~=1.19.2, but you have numpy 1.18.4 which is incompatible.
kapre 0.3.5 requires numpy>=1.18.5, but you have numpy 1.18.4 which is incompatible.
google-colab 1.0.0 requires pandas~=1.1.0; python_version >= "3.0", but you have pandas 1.0.3 which is incompatible.
fbprophet 0.7.1 requires pandas>=1.0.4, but you have pandas 1.0.3 which is incompatible.
datascience 0.10.6 requires folium==0.2.1, but you have folium 0.8.3 which is incompatible.
albumentations 0.1.12 requires imgaug<0.2.7,>=0.2.5, but you have imgaug 0.2.9 which is incompatible.[0m
Successfully installed attrs-19.3.0 datasets-1.1.2 jsonnet-0.15.0 lxml-4.6.3 nltk-3.6.2 numexpr-2.7.1 numpy-1.18.4 pandas-1.0.3 python-Levenshtein-0.12.0 sacremoses-0.0.43 sentencepiece-0.1.91 seqeval-0.0.12 tokenizers-0.10.1 tqdm-4.46.0 transformers-4.5.0 xxhash-2.0.2
###Markdown
Restart runtime after installing libs **0.2 Download Edge Probing data**After preprocessing and generating the Edge Probing data for all the tasks (see report for details), these data was uploaded to our github and will be used here. Next, we will create the corresponding task configs.Because the tasks QUES and SUP are not supported by jiant, we added new task QUES to the jiant library (see report). The task SUP has the same jiant format structure as COREF, therefore we will reuse the default COREF task in jiant to probe SUP task.
###Code
%cd /content/jiant
import jiant.utils.python.io as py_io
import jiant.utils.display as display
import os
def init_task_config(task_name, size):
jiant_task = task_name
if(task_name == "sup-squad" or task_name == "sup-babi"):
jiant_task = "coref" # use coref task to probe supporting facts task because of the analog structure of jiant EP json format
os.makedirs("/content/tasks/configs/", exist_ok=True)
os.makedirs(f"/content/tasks/data/{task_name}", exist_ok=True)
py_io.write_json({
"task": jiant_task,
"paths": {
"train": f"/content/tasks/data/{task_name}/{size}/train.jsonl",
"val": f"/content/tasks/data/{task_name}/{size}/val.jsonl",
},
"name": jiant_task
}, f"/content/tasks/configs/{task_name}_config.json")
task_names = [
#"ner",
#"semeval",
"coref",
#"ques"
#"sup-squad",
#"sup-babi",
#"sup-hotpot",
]
size = "test" # small, medium or big
for task_name in task_names:
init_task_config(task_name, size)
# copy the task data to the tasks folder created above
!cp -r "/content/How-Does-Bert-Answer-QA-DLP2021/src/probing-tasks/data" "/content/tasks"
###Output
_____no_output_____
###Markdown
**0.3 Download BERT models**Next, we download the models we want to train, for example a `bert-base-uncased` and a `bert-base-uncased-squad-v1` model
###Code
import jiant.proj.main.export_model as export_model
models = [
"bert-base-uncased",
"csarron/bert-base-uncased-squad-v1"
]
for model in models:
export_model.export_model(
hf_pretrained_model_name_or_path=model,
output_base_path=f"/content/models/{model}",
)
###Output
_____no_output_____
###Markdown
**0.4 Tokenize and cache**With the model and data ready, we can now tokenize and cache the input features for our task. This converts the input examples into tokenized features ready to be consumed by the model, and saves them to disk in chunks.
###Code
import jiant.shared.caching as caching
import jiant.proj.main.tokenize_and_cache as tokenize_and_cache
seq_length_options = {
"ner": 128,
"semeval": 128,
"coref": 128,
"ques": 128,
"sup-squad": 384,
"sup-babi": 384,
"sup-hotpot": 384,
}
# Tokenize and cache each task
def tokenize(task_name, model):
tokenize_and_cache.main(tokenize_and_cache.RunConfiguration(
task_config_path=f"/content/tasks/configs/{task_name}_config.json",
hf_pretrained_model_name_or_path=model,
output_dir=f"/content/cache/{task_name}",
phases=["train", "val"],
max_seq_length=seq_length_options[task_name],
))
for task_name in task_names:
for model in models:
tokenize(task_name, model)
###Output
CorefTask
[train]: /content/tasks/data/coref/test/train.jsonl
[val]: /content/tasks/data/coref/test/val.jsonl
###Markdown
We can inspect the first example of the first chunk of a task.
###Code
row = caching.ChunkedFilesDataCache(f"/content/cache/{task_names[0]}/train").load_chunk(0)[0]["data_row"]
print(row.input_ids)
print(row.tokens)
print(row.spans)
print(row.tokens[row.spans[0][0]: row.spans[0][1]+1])
#print(row.tokens[row.spans[1][0]: row.spans[1][1]+1])
###Output
[ 101 24111 1005 1055 2197 2420 1998 2054 2003 2056 2002 2001
2066 102 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0]
['[CLS]', 'saddam', "'", 's', 'last', 'days', 'and', 'what', 'is', 'said', 'he', 'was', 'like', '[SEP]']
[[ 1 3]
[10 10]]
['saddam', "'", 's']
###Markdown
**0.5 Write a run config**Here we are going to write what we call a `jiant_task_config`. This configuration file defines many of the subtleties of our training pipeline, such as which tasks we train on, which we evaluate on, and the batch size for each task. We use a helper `Configurator` to write out a `jiant_task_container_config`.
###Code
import jiant.proj.main.scripts.configurator as configurator
def create_jiant_task_config(task_name):
jiant_run_config = configurator.SimpleAPIMultiTaskConfigurator(
task_config_base_path="/content/tasks/configs",
task_cache_base_path="/content/cache",
train_task_name_list=[task_name],
val_task_name_list=[task_name],
train_batch_size=16,
eval_batch_size=32,
epochs=5,
num_gpus=1,
).create_config()
os.makedirs("/content/tasks/run_configs/", exist_ok=True)
py_io.write_json(jiant_run_config, f"/content/tasks/run_configs/{task_name}_run_config.json")
#display.show_json(jiant_run_config)
###Output
_____no_output_____
###Markdown
**0.6 Write the training function**The last step is to train the model on the probing tasks. We create a function that lets us configure the training process through its parameters, e.g. which probing task, which model and how many layers to train.
###Code
import jiant.proj.main.runscript as main_runscript
def run_probing_task(task_name, model_name="bert-base-uncased", num_layers=1, bin_model_path=""):
hf_model_name = model_name
if(model_name == "bert-babi"):
hf_model_name = "bert-base-uncased"
run_args = main_runscript.RunConfiguration(
jiant_task_container_config_path=f"/content/tasks/run_configs/{task_name}_run_config.json",
output_dir=f"/content/tasks/runs/{task_name}",
hf_pretrained_model_name_or_path=hf_model_name,
model_path=f"/content/models/{model_name}/model/model.p",
model_config_path=f"/content/models/{model_name}/model/config.json",
learning_rate=1e-2,
eval_every_steps=1000,
do_train=True,
do_val=True,
do_save=True,
force_overwrite=True,
num_hidden_layers=num_layers,
bin_model_path=bin_model_path,
)
return main_runscript.run_loop(run_args)
###Output
_____no_output_____
###Markdown
We sum up all the steps above in a convenient method that runs the whole probing pipeline. After probing, we extract the macro-averaged F1 score to prepare for the visualization.
###Code
# the whole jiant pipeline
def probe(model, task_name, n_layers, dataset_size):
init_task_config(task_name, dataset_size)
tokenize(task_name, model)
create_jiant_task_config(task_name)
probing_output = run_probing_task(task_name, model, n_layers)
f1_macro = str(probing_output[task_name]["metrics"]["minor"]["f1_macro"])
return f1_macro
###Output
_____no_output_____
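###Markdown
For illustration, a single probing run could look like the following. This is a hypothetical call; it assumes the `coref` config and the `test` split prepared above are in place.
###Code
# Example: probe the COREF task on top of all 12 layers of bert-base-uncased.
f1_macro = probe("bert-base-uncased", "coref", 12, "test")
print("COREF macro F1:", f1_macro)
###Output
_____no_output_____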
###Markdown
**0.7 Write the visualization**
###Code
import os
import json
import matplotlib.pyplot as plt
num_layers = list(range(1, 13, 2)) # number of hidden layers to probe: 1, 3, 5, 7, 9, 11
def plot_task(model, task, linestyle, f1_result):
y = [float(f1_result[model][task][n_layers]['f1_macro']) for n_layers in num_layers]
plt.plot(num_layers, y,linestyle, label=model)
plt.grid(True)
plt.legend(loc='lower center', bbox_to_anchor=(0.5, -0.5))
plt.suptitle(task)
plt.xlim(0, 12)
def plot_task_from_file(input_path, model, task, linestyle, f1_result):
with open(input_path, "r") as f:
f1_result = json.load(f)
y = [float(f1_result[model][task][n_layers]['f1_macro']) for n_layers in num_layers]
plt.plot(num_layers, y,linestyle, label=model)
plt.legend()
plt.suptitle(task)
model_to_linestyle = {
"bert-base-uncased": ":g",
"csarron/bert-base-uncased-squad-v1": "-y",
"bert-babi": "b",
"bert-adversarial": "r",
"roberta-base": "m",
}
###Output
_____no_output_____
###Markdown
**1. Reproduce the probing tasks**--- **1.0 Download and export models** Now we can start training the models on the probing tasks like the paper did, and then visualize the results. Download the bert models from huggingface web. Skip this step If these models was already downloaded.
###Code
import jiant.proj.main.export_model as export_model
models = [
"bert-base-uncased",
"csarron/bert-base-uncased-squad-v1"
]
for model in models:
export_model.export_model(
hf_pretrained_model_name_or_path=model,
output_base_path=f"/content/models/{model}",
)
###Output
Some weights of BertForPreTraining were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['cls.predictions.decoder.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Some weights of the model checkpoint at csarron/bert-base-uncased-squad-v1 were not used when initializing BertForPreTraining: ['qa_outputs.weight', 'qa_outputs.bias']
- This IS expected if you are initializing BertForPreTraining from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForPreTraining from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForPreTraining were not initialized from the model checkpoint at csarron/bert-base-uncased-squad-v1 and are newly initialized: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.predictions.decoder.bias', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
###Markdown
Download and export the bert-babi model provided by Betty (https://cloud.beuth-hochschule.de/index.php/s/X8NN6BaZA3Wg7JW)
###Code
babi_id_url = "1nl8Hft8isOmocwjZ-ulAvIfwifAyMTtY"
os.makedirs("/content/babi-bin-betty/", exist_ok=True)
!gdown --id $babi_id_url -O "/content/babi-bin-betty/babi.bin"
babi_model_path = "/content/babi-bin-betty/babi.bin"
model_name = "bert-babi"
export_model.export_model(
hf_pretrained_model_name_or_path="bert-base-uncased",
bin_model_path=babi_model_path,
output_base_path=f"/content/models/{model_name}",
)
###Output
Downloading...
From: https://drive.google.com/uc?id=1nl8Hft8isOmocwjZ-ulAvIfwifAyMTtY
To: /content/babi-bin-betty/babi.bin
438MB [00:06, 70.2MB/s]
###Markdown
**1.1 NER Task**
###Code
ner_results = {}
task = "ner"
dataset_size = "small"
bert_base_model = "bert-base-uncased"
bert_squad_model = "csarron/bert-base-uncased-squad-v1"
bert_babi_model = "bert-babi"
###Output
_____no_output_____
###Markdown
1.1.1 Train `bert-base` model
###Code
bert_base_model = "bert-base-uncased"
num_layers = list(range(1, 13, 2)) # from 1 to 12 layers
ner_results[bert_base_model] = {}
ner_results[bert_base_model][task] = {}
for n_layers in num_layers:
f1_macro = probe(bert_base_model, task, n_layers, dataset_size) # get f1_macro after probing
ner_results[bert_base_model][task][n_layers] = {}
ner_results[bert_base_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
ner_results[bert_base_model] = {
'ner': {
1: {'f1_macro': '0.3268712768712769'},
3: {'f1_macro': '0.3961069023569024'},
5: {'f1_macro': '0.0'},
7: {'f1_macro': '0.0'},
9: {'f1_macro': '0.0'},
11: {'f1_macro': '0.0'}
}
}
###Output
_____no_output_____
###Markdown
1.1.2 Train `bert-base-finetuned-squad` model
###Code
bert_squad_model = "csarron/bert-base-uncased-squad-v1"
num_layers = list(range(7, 13, 2)) # probe with 7, 9 and 11 hidden layers
ner_results[bert_squad_model] = {}
ner_results[bert_squad_model][task] = {}
for n_layers in num_layers:
f1_macro = probe(bert_squad_model, task, n_layers, dataset_size) # get f1_macro after probing
ner_results[bert_squad_model][task][n_layers] = {}
ner_results[bert_squad_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
ner_results[bert_squad_model] = {
'ner': {
1: {'f1_macro': '0.3846153846153846'},
3: {'f1_macro': '0.42857142857142855'},
5: {'f1_macro': '0.42857142857142855'},
7: {'f1_macro': '0.0'},
9: {'f1_macro': '0.0'},
11: {'f1_macro': '0.0'}
}
}
###Output
_____no_output_____
###Markdown
1.1.3 Train `bert-base-finetuned-babi` model
###Code
bert_babi_model = "bert-babi"
num_layers = list(range(1, 13, 2)) # from 1 to 12 layers
ner_results[bert_babi_model] = {}
ner_results[bert_babi_model][task] = {}
for n_layers in num_layers:
init_task_config(task, dataset_size)
tokenize(task, "bert-base-uncased") # use tokenizer of bert base
create_jiant_task_config(task)
probing_output = run_probing_task(task, bert_babi_model, n_layers, bin_model_path=babi_model_path)
f1_macro = str(probing_output[task]["metrics"]["minor"]["f1_macro"])
ner_results[bert_babi_model][task][n_layers] = {}
ner_results[bert_babi_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
ner_results[bert_babi_model] = {
'ner': {
1: {'f1_macro': '0.36396805106482527'},
3: {'f1_macro': '0.2668458781362007'},
5: {'f1_macro': '0.0'},
7: {'f1_macro': '0.0'},
9: {'f1_macro': '0.0'},
11: {'f1_macro': '0.0'}
}
}
###Output
_____no_output_____
###Markdown
1.1.4 Visualization
###Code
models = [bert_base_model, bert_squad_model, bert_babi_model]
for model in models:
plot_task(model, task, model_to_linestyle[model], ner_results)
plt.show()
###Output
_____no_output_____
###Markdown
**1.2 SEMEVAL (aka REL) Task**
###Code
semeval_results = {}
task = "semeval"
dataset_size = "small"
###Output
_____no_output_____
###Markdown
1.2.1 Train `bert-base` model
###Code
# Probe SEMEVAL task with bert-base and bert-squad and plot macro f1 score
bert_base_model = "bert-base-uncased"
num_layers = list(range(1, 13, 2)) # from 1 to 12 layers
semeval_results[bert_base_model] = {}
semeval_results[bert_base_model][task] = {}
for n_layers in num_layers:
f1_macro = probe(bert_base_model, task, n_layers, dataset_size) # get f1_macro after probing
semeval_results[bert_base_model][task][n_layers] = {}
semeval_results[bert_base_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
semeval_results[bert_base_model] = {
'semeval': {
1: {'f1_macro': '0.021052631578947368'},
3: {'f1_macro': '0.0'},
5: {'f1_macro': '0.0'},
7: {'f1_macro': '0.0'},
9: {'f1_macro': '0.0'},
11: {'f1_macro': '0.0'}
}
}
###Output
_____no_output_____
###Markdown
1.2.2 Train `bert-base-finetuned-squad` model
###Code
bert_squad_model = "csarron/bert-base-uncased-squad-v1"
num_layers = list(range(7, 13, 2)) # probe with 7, 9 and 11 hidden layers
semeval_results[bert_squad_model] = {}
semeval_results[bert_squad_model][task] = {}
for n_layers in num_layers:
f1_macro = probe(bert_squad_model, task, n_layers, dataset_size) # get f1_macro after probing
semeval_results[bert_squad_model][task][n_layers] = {}
semeval_results[bert_squad_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
semeval_results[bert_squad_model] = {
'semeval': {
1: {'f1_macro': '0.17449392712550607'},
3: {'f1_macro': '0.0'},
5: {'f1_macro': '0.0'},
7: {'f1_macro': '0.0'},
9: {'f1_macro': '0.0'},
11: {'f1_macro': '0.0'}
}
}
###Output
_____no_output_____
###Markdown
1.2.3 Train `bert-base-finetuned-babi` model
###Code
bert_babi_model = "bert-babi"
num_layers = list(range(1, 13, 2)) # from 1 to 12 layers
semeval_results[bert_babi_model] = {}
semeval_results[bert_babi_model][task] = {}
for n_layers in num_layers:
init_task_config(task, dataset_size)
tokenize(task, "bert-base-uncased") # use tokenizer of bert base
create_jiant_task_config(task)
probing_output = run_probing_task(task, bert_babi_model, n_layers, bin_model_path=babi_model_path)
f1_macro = str(probing_output[task]["metrics"]["minor"]["f1_macro"])
semeval_results[bert_babi_model][task][n_layers] = {}
semeval_results[bert_babi_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
semeval_results[bert_babi_model] = {
'semeval': {
1: {'f1_macro': '0.16478696741854637'},
3: {'f1_macro': '0.011695906432748537'},
5: {'f1_macro': '0.0'},
7: {'f1_macro': '0.0'},
9: {'f1_macro': '0.0'},
11: {'f1_macro': '0.0'}
}
}
###Output
_____no_output_____
###Markdown
1.2.4 Visualization
###Code
models = [bert_base_model, bert_squad_model, bert_babi_model]
for model in models:
plot_task(model, task, model_to_linestyle[model], semeval_results)
plt.show()
###Output
_____no_output_____
###Markdown
**1.3 COREF Task**
###Code
coref_results = {}
task = "coref"
dataset_size = "small"
###Output
_____no_output_____
###Markdown
1.3.1 Train `bert-base` model
###Code
bert_base_model = "bert-base-uncased"
num_layers = list(range(1, 13, 2)) # from 1 to 12 layers
coref_results[bert_base_model] = {}
coref_results[bert_base_model][task] = {}
for n_layers in num_layers:
f1_macro = probe(bert_base_model, task, n_layers, dataset_size) # get f1_macro after probing
coref_results[bert_base_model][task][n_layers] = {}
coref_results[bert_base_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
bert_base_model = "bert-base-uncased"
coref_results[bert_base_model] = {
'coref': {
1: {'f1_macro': '0.7942326490713587'},
3: {'f1_macro': '0.4074074074074074'},
5: {'f1_macro': '0.0'},
7: {'f1_macro': '0.0'},
9: {'f1_macro': '0.0'},
11: {'f1_macro': '0.0'}
}
}
###Output
_____no_output_____
###Markdown
1.3.2 Train `bert-base-finetuned-squad` model
###Code
bert_squad_model = "csarron/bert-base-uncased-squad-v1"
num_layers = list(range(5, 13, 2)) # probe layers 5, 7, 9 and 11
coref_results[bert_squad_model] = {}
coref_results[bert_squad_model][task] = {}
for n_layers in num_layers:
f1_macro = probe(bert_squad_model, task, n_layers, dataset_size) # get f1_macro after probing
coref_results[bert_squad_model][task][n_layers] = {}
coref_results[bert_squad_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
bert_squad_model = "csarron/bert-base-uncased-squad-v1"
coref_results[bert_squad_model] = {
'coref': {
1: {'f1_macro': '0.7834101382488479'},
3: {'f1_macro': '0.3928571428571429'},
5: {'f1_macro': '0.0'},
7: {'f1_macro': '0.3928571428571429'},
9: {'f1_macro': '0.0'},
11: {'f1_macro': '0.0'}
}
}
###Output
_____no_output_____
###Markdown
1.3.3 Train `bert-base-finetuned-babi` model
###Code
bert_babi_model = "bert-babi"
num_layers = list(range(1, 13, 2)) # from 1 to 12 layers
coref_results[bert_babi_model] = {}
coref_results[bert_babi_model][task] = {}
for n_layers in num_layers:
init_task_config(task, dataset_size)
tokenize(task, "bert-base-uncased") # use tokenizer of bert base
create_jiant_task_config(task)
probing_output = run_probing_task(task, bert_babi_model, n_layers, bin_model_path=babi_model_path)
f1_macro = str(probing_output[task]["metrics"]["minor"]["f1_macro"])
coref_results[bert_babi_model][task][n_layers] = {}
coref_results[bert_babi_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
bert_babi_model = "bert-babi"
coref_results[bert_babi_model] = {
'coref': {
1: {'f1_macro': '0.746031746031746'},
3: {'f1_macro': '0.45022194039315155'},
5: {'f1_macro': '0.0'},
7: {'f1_macro': '0.0'},
9: {'f1_macro': '0.0'},
11: {'f1_macro': '0.0'}
}
}
###Output
_____no_output_____
###Markdown
1.3.4 Visualization
###Code
models = [bert_base_model, bert_squad_model, bert_babi_model]
for model in models:
plot_task(model, task, model_to_linestyle[model], coref_results)
plt.show()
###Output
_____no_output_____
###Markdown
**1.4 QUES Task**
###Code
ques_results = {}
task = "ques"
dataset_size = "small"
###Output
_____no_output_____
###Markdown
1.4.1 Train `bert-base` model
###Code
bert_base_model = "bert-base-uncased"
num_layers = list(range(1, 13, 2)) # from 1 to 12 layers
ques_results[bert_base_model] = {}
ques_results[bert_base_model][task] = {}
for n_layers in num_layers:
f1_macro = probe(bert_base_model, task, n_layers, dataset_size) # get f1_macro after probing
ques_results[bert_base_model][task][n_layers] = {}
ques_results[bert_base_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
bert_base_model = "bert-base-uncased"
ques_results[bert_base_model] = {
'ques': {
1: {'f1_macro': '0.0833234303822539'},
3: {'f1_macro': '0.0'},
5: {'f1_macro': '0.0'},
7: {'f1_macro': '0.0'},
9: {'f1_macro': '0.0'},
11: {'f1_macro': '0.0'}
}
}
###Output
_____no_output_____
###Markdown
1.4.2 Train `bert-base-finetuned-squad` model
###Code
bert_squad_model = "csarron/bert-base-uncased-squad-v1"
num_layers = list(range(1, 13, 2)) # from 1 to 12 layers
ques_results[bert_squad_model] = {}
ques_results[bert_squad_model][task] = {}
for n_layers in num_layers:
f1_macro = probe(bert_squad_model, task, n_layers, dataset_size) # get f1_macro after probing
ques_results[bert_squad_model][task][n_layers] = {}
ques_results[bert_squad_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
bert_squad_model = "csarron/bert-base-uncased-squad-v1"
ques_results[bert_squad_model] = {
'ques': {
1: {'f1_macro': '0.0404040404040404'},
3: {'f1_macro': '0.0'},
5: {'f1_macro': '0.0'},
7: {'f1_macro': '0.0'},
9: {'f1_macro': '0.0'},
11: {'f1_macro': '0.0'}
}
}
###Output
_____no_output_____
###Markdown
1.4.3 Train `bert-base-finetuned-babi` model
###Code
bert_babi_model = "bert-babi"
num_layers = list(range(1, 13, 2)) # from 1 to 12 layers
ques_results[bert_babi_model] = {}
ques_results[bert_babi_model][task] = {}
for n_layers in num_layers:
init_task_config(task, dataset_size)
tokenize(task, "bert-base-uncased") # use tokenizer of bert base
create_jiant_task_config(task)
probing_output = run_probing_task(task, bert_babi_model, n_layers, bin_model_path=babi_model_path)
f1_macro = str(probing_output[task]["metrics"]["minor"]["f1_macro"])
ques_results[bert_babi_model][task][n_layers] = {}
ques_results[bert_babi_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
bert_babi_model = "bert-babi"
ques_results[bert_babi_model] = {
'ques': {
1: {'f1_macro': '0.0665478312537136'},
3: {'f1_macro': '0.0'},
5: {'f1_macro': '0.0'},
7: {'f1_macro': '0.0'},
9: {'f1_macro': '0.0'},
11: {'f1_macro': '0.0'}
}
}
###Output
_____no_output_____
###Markdown
1.4.4 Visualization
###Code
models = [bert_base_model, bert_squad_model, bert_babi_model]
for model in models:
plot_task(model, task, model_to_linestyle[model], ques_results)
plt.show()
###Output
_____no_output_____
###Markdown
**1.5 SUP-SQUAD Task**
###Code
sup_squad_results = {}
task = "sup-squad"
dataset_size = "test"
###Output
_____no_output_____
###Markdown
1.5.1 Train `bert-base` model
###Code
bert_base_model = "bert-base-uncased"
num_layers = list(range(9, 13, 2)) # probe layers 9 and 11
sup_squad_results[bert_base_model] = {}
sup_squad_results[bert_base_model][task] = {}
for n_layers in num_layers:
f1_macro = probe(bert_base_model, task, n_layers, dataset_size) # get f1_macro after probing
sup_squad_results[bert_base_model][task][n_layers] = {}
sup_squad_results[bert_base_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
bert_base_model = "bert-base-uncased"
sup_squad_results[bert_base_model] = {
'sup-squad': {
1: {'f1_macro': '0.3846153846153846'},
3: {'f1_macro': '0.3333333333333333'},
5: {'f1_macro': '0.42857142857142855'},
7: {'f1_macro': '0.42857142857142855'},
9: {'f1_macro': '0.42857142857142855'},
11: {'f1_macro': '0.42857142857142855'}
}
}
###Output
_____no_output_____
###Markdown
1.5.2 Train `bert-base-finetuned-squad` model
###Code
bert_squad_model = "csarron/bert-base-uncased-squad-v1"
num_layers = list(range(1, 13, 2)) # from 1 to 12 layers
sup_squad_results[bert_squad_model] = {}
sup_squad_results[bert_squad_model][task] = {}
for n_layers in num_layers:
f1_macro = probe(bert_squad_model, task, n_layers, dataset_size) # get f1_macro after probing
sup_squad_results[bert_squad_model][task][n_layers] = {}
sup_squad_results[bert_squad_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
bert_squad_model = "csarron/bert-base-uncased-squad-v1"
sup_squad_results[bert_squad_model] = {
'sup-squad': {
1: {'f1_macro': '0.3846153846153846'},
3: {'f1_macro': '0.42857142857142855'},
5: {'f1_macro': '0.42857142857142855'},
7: {'f1_macro': '0.42857142857142855'},
9: {'f1_macro': '0.42857142857142855'},
11: {'f1_macro': '0.42857142857142855'}
}
}
###Output
_____no_output_____
###Markdown
1.5.3 Visualization
###Code
models = [bert_base_model, bert_squad_model]
for model in models:
plot_task(model, task, model_to_linestyle[model], sup_squad_results)
plt.show()
###Output
_____no_output_____
###Markdown
**1.6 SUP-BABI Task**
###Code
sup_babi_results = {}
task = "sup-babi"
dataset_size = "test"
###Output
_____no_output_____
###Markdown
1.6.1 Train `bert-base` model
###Code
bert_base_model = "bert-base-uncased"
num_layers = list(range(11, 13, 2)) # probe layer 11 only
sup_babi_results[bert_base_model] = {}
sup_babi_results[bert_base_model][task] = {}
for n_layers in num_layers:
f1_macro = probe(bert_base_model, task, n_layers, dataset_size) # get f1_macro after probing
sup_babi_results[bert_base_model][task][n_layers] = {}
sup_babi_results[bert_base_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
bert_base_model = "bert-base-uncased"
sup_babi_results[bert_base_model] = {
'sup-babi': {
1: {'f1_macro': '0.7700578990901572'},
3: {'f1_macro': '0.47058823529411764'},
5: {'f1_macro': '0.47058823529411764'},
7: {'f1_macro': '0.47058823529411764'},
9: {'f1_macro': '0.47058823529411764'},
11: {'f1_macro': '0.47058823529411764'}
}
}
###Output
_____no_output_____
###Markdown
1.6.2 Train `bert-base-finetuned-babi` model
###Code
bert_babi_model = "bert-babi"
num_layers = list(range(11, 13, 2)) # probe layer 11 only
sup_babi_results[bert_babi_model] = {}
sup_babi_results[bert_babi_model][task] = {}
for n_layers in num_layers:
init_task_config(task, dataset_size)
tokenize(task, "bert-base-uncased") # use tokenizer of bert base
create_jiant_task_config(task)
probing_output = run_probing_task(task, bert_babi_model, n_layers, bin_model_path=babi_model_path)
f1_macro = str(probing_output[task]["metrics"]["minor"]["f1_macro"])
sup_babi_results[bert_babi_model][task][n_layers] = {}
sup_babi_results[bert_babi_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
bert_babi_model = "bert-babi"
sup_babi_results[bert_babi_model] = {
'sup-babi': {
1: {'f1_macro': '0.6228571428571429'},
3: {'f1_macro': '0.47058823529411764'},
5: {'f1_macro': '0.47058823529411764'},
7: {'f1_macro': '0.47058823529411764'},
9: {'f1_macro': '0.47058823529411764'},
11: {'f1_macro': '0.47058823529411764'}
}
}
###Output
_____no_output_____
###Markdown
1.6.3 Visualization
###Code
models = [bert_base_model, bert_babi_model]
for model in models:
plot_task(model, task, model_to_linestyle[model], sup_babi_results)
plt.show()
###Output
###Markdown
**2. Experiment with `Bert-base-uncased` trained on `AdversarialQA` dataset**--- **2.0 Download and export Bert finetuned Adversarial dataset model**
###Code
import os
bert_adversarial_model = "bert-adversarial"
bert_adversarial_path = "/content/bert-adversarial"
os.makedirs(bert_adversarial_path, exist_ok=True)
# google drive id of all files included in bert-base-adversarial
config_json_id = "19-gobmJ_8PWuQURkHZ1JvEtublNS29Cd"
pytorch_model_bin_id = "1G8WurpHMyfk14nHgaZ9b6CKPCi3HdbQc"
special_tokens_map_id = "1GZI4j31ejNFYlEJfgW70bs1cV6He_LEX"
tokenizer_config_id = "1qhSB_QoGL1Kel_2OwAXHP5uTCkPAW4Pl"
tokenizer_json_id = "1wWRk5BdPBwGUd6X3halFrDWq_KqPCVe3"
training_args_id = "1bHZqsV08OjsN6n4Gax1luwF5Rcn4e3AG"
vocab_id = "1aAYV6W5isBQe2T09QbkAkYGibyEsXDyE"
!gdown --id "{config_json_id}" -O "{bert_adversarial_path}/config.json"
!gdown --id "{pytorch_model_bin_id}" -O "{bert_adversarial_path}/pytorch_model.bin"
!gdown --id "{special_tokens_map_id}" -O "{bert_adversarial_path}/special_tokens_map.json"
!gdown --id "{tokenizer_config_id}" -O "{bert_adversarial_path}/tokenizer_config.json"
!gdown --id "{tokenizer_json_id}" -O "{bert_adversarial_path}/tokenizer.json"
!gdown --id "{training_args_id}" -O "{bert_adversarial_path}/training_args.bin"
!gdown --id "{vocab_id}" -O "{bert_adversarial_path}/vocab.txt"
export_model.export_model(
hf_pretrained_model_name_or_path=bert_adversarial_path,
output_base_path=f"/content/models/{bert_adversarial_model}",
)
###Output
_____no_output_____
###Markdown
**2.1 NER Task** 2.1.1 Train
###Code
task = "ner"
dataset_size = "small"
ner_results = {}
bert_adversarial_model = "bert-adversarial"
num_layers = list(range(1, 13, 2)) # from 1 to 12 layers
ner_results[bert_adversarial_model] = {}
ner_results[bert_adversarial_model][task] = {}
for n_layers in num_layers:
init_task_config(task, dataset_size)
tokenize(task, "bert-base-uncased") # use tokenizer of bert base
create_jiant_task_config(task)
run_args = main_runscript.RunConfiguration(
jiant_task_container_config_path=f"/content/tasks/run_configs/{task}_run_config.json",
output_dir=f"/content/tasks/runs/{task}",
hf_pretrained_model_name_or_path=bert_adversarial_path,
model_path=f"/content/models/{bert_adversarial_model}/model/model.p",
model_config_path=f"/content/models/{bert_adversarial_model}/model/config.json",
learning_rate=1e-3,
eval_every_steps=1000,
do_train=True,
do_val=True,
do_save=True,
force_overwrite=True,
num_hidden_layers=n_layers,
)
probing_output = main_runscript.run_loop(run_args)
f1_macro = str(probing_output[task]["metrics"]["minor"]["f1_macro"])
ner_results[bert_adversarial_model][task][n_layers] = {}
ner_results[bert_adversarial_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
bert_adversarial_model = "bert-adversarial"
ner_results[bert_adversarial_model] = {
'ner': {
1: {'f1_macro': '0.10784313725490197'},
3: {'f1_macro': '0.0971861471861472'},
5: {'f1_macro': '0.0'},
7: {'f1_macro': '0.14662822557559402'},
9: {'f1_macro': '0.0'},
11: {'f1_macro': '0.0'}
}
}
###Output
_____no_output_____
###Markdown
2.1.2 Visualization
###Code
models = [bert_base_model, bert_squad_model, bert_babi_model, bert_adversarial_model]
for model in models:
plot_task(model, task, model_to_linestyle[model], ner_results)
plt.show()
###Output
_____no_output_____
###Markdown
**2.2 SEMEVAL Task** 2.2.1 Train
###Code
task = "semeval"
dataset_size = "small"
bert_adversarial_model = "bert-adversarial"
num_layers = list(range(1, 13, 2)) # from 1 to 12 layers
semeval_results[bert_adversarial_model] = {}
semeval_results[bert_adversarial_model][task] = {}
for n_layers in num_layers:
init_task_config(task, dataset_size)
tokenize(task, "bert-base-uncased") # use tokenizer of bert base
create_jiant_task_config(task)
run_args = main_runscript.RunConfiguration(
jiant_task_container_config_path=f"/content/tasks/run_configs/{task}_run_config.json",
output_dir=f"/content/tasks/runs/{task}",
hf_pretrained_model_name_or_path=bert_adversarial_path,
model_path=f"/content/models/{bert_adversarial_model}/model/model.p",
model_config_path=f"/content/models/{bert_adversarial_model}/model/config.json",
learning_rate=1e-3,
eval_every_steps=1000,
do_train=True,
do_val=True,
do_save=True,
force_overwrite=True,
num_hidden_layers=n_layers,
)
probing_output = main_runscript.run_loop(run_args)
f1_macro = str(probing_output[task]["metrics"]["minor"]["f1_macro"])
semeval_results[bert_adversarial_model][task][n_layers] = {}
semeval_results[bert_adversarial_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
bert_adversarial_model = "bert-adversarial"
semeval_results[bert_adversarial_model] = {
'semeval': {
1: {'f1_macro': '0.07832080200501253'},
3: {'f1_macro': '0.0'},
5: {'f1_macro': '0.0'},
7: {'f1_macro': '0.0'},
9: {'f1_macro': '0.0'},
11: {'f1_macro': '0.0'}
}
}
###Output
_____no_output_____
###Markdown
2.2.2 Visualization
###Code
models = [bert_base_model, bert_squad_model, bert_babi_model, bert_adversarial_model]
for model in models:
plot_task(model, task, model_to_linestyle[model], semeval_results)
plt.show()
###Output
_____no_output_____
###Markdown
**2.3 COREF Task** 2.3.1 Train
###Code
task = "coref"
dataset_size = "small"
bert_adversarial_model = "bert-adversarial"
num_layers = list(range(1, 13, 2)) # from 1 to 12 layers
coref_results[bert_adversarial_model] = {}
coref_results[bert_adversarial_model][task] = {}
for n_layers in num_layers:
init_task_config(task, dataset_size)
tokenize(task, "bert-base-uncased") # use tokenizer of bert base
create_jiant_task_config(task)
run_args = main_runscript.RunConfiguration(
jiant_task_container_config_path=f"/content/tasks/run_configs/{task}_run_config.json",
output_dir=f"/content/tasks/runs/{task}",
hf_pretrained_model_name_or_path=bert_adversarial_path,
model_path=f"/content/models/{bert_adversarial_model}/model/model.p",
model_config_path=f"/content/models/{bert_adversarial_model}/model/config.json",
learning_rate=1e-3,
eval_every_steps=1000,
do_train=True,
do_val=True,
do_save=True,
force_overwrite=True,
num_hidden_layers=n_layers,
)
probing_output = main_runscript.run_loop(run_args)
f1_macro = str(probing_output[task]["metrics"]["minor"]["f1_macro"])
coref_results[bert_adversarial_model][task][n_layers] = {}
coref_results[bert_adversarial_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
bert_adversarial_model = "bert-adversarial"
coref_results[bert_adversarial_model] = {
'coref': {
1: {'f1_macro': '0.5064102564102564'},
3: {'f1_macro': '0.37499999999999994'},
5: {'f1_macro': '0.0'},
7: {'f1_macro': '0.0'},
9: {'f1_macro': '0.0'},
11: {'f1_macro': '0.0'}
}
}
###Output
_____no_output_____
###Markdown
2.3.2 Visualization
###Code
models = [bert_base_model, bert_squad_model, bert_babi_model, bert_adversarial_model]
for model in models:
plot_task(model, task, model_to_linestyle[model], coref_results)
plt.show()
###Output
_____no_output_____
###Markdown
**2.4 QUES Task** 2.4.1 Train
###Code
task = "ques"
dataset_size = "small"
ques_results = {}
bert_adversarial_model = "bert-adversarial"
num_layers = list(range(1, 13, 2)) # from 1 to 12 layers
ques_results[bert_adversarial_model] = {}
ques_results[bert_adversarial_model][task] = {}
for n_layers in num_layers:
init_task_config(task, dataset_size)
tokenize(task, "bert-base-uncased") # use tokenizer of bert base
create_jiant_task_config(task)
run_args = main_runscript.RunConfiguration(
jiant_task_container_config_path=f"/content/tasks/run_configs/{task}_run_config.json",
output_dir=f"/content/tasks/runs/{task}",
hf_pretrained_model_name_or_path=bert_adversarial_path,
model_path=f"/content/models/{bert_adversarial_model}/model/model.p",
model_config_path=f"/content/models/{bert_adversarial_model}/model/config.json",
learning_rate=1e-3,
eval_every_steps=1000,
do_train=True,
do_val=True,
do_save=True,
force_overwrite=True,
num_hidden_layers=n_layers,
)
probing_output = main_runscript.run_loop(run_args)
f1_macro = str(probing_output[task]["metrics"]["minor"]["f1_macro"])
ques_results[bert_adversarial_model][task][n_layers] = {}
ques_results[bert_adversarial_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
bert_adversarial_model = "bert-adversarial"
ques_results[bert_adversarial_model] = {
'ques': {
1: {'f1_macro': '0.0'},
3: {'f1_macro': '0.0'},
5: {'f1_macro': '0.0'},
7: {'f1_macro': '0.0'},
9: {'f1_macro': '0.0'},
11: {'f1_macro': '0.0'}
}
}
###Output
_____no_output_____
###Markdown
2.4.2 Visualization
###Code
models = [bert_base_model, bert_squad_model, bert_babi_model, bert_adversarial_model]
for model in models:
plot_task(model, task, model_to_linestyle[model], ques_results)
plt.show()
###Output
_____no_output_____
###Markdown
**3. Experiment with `Roberta-base`**--- Download and export Roberta-base model
###Code
import jiant.proj.main.export_model as export_model
roberta_model = "roberta-base"
export_model.export_model(
hf_pretrained_model_name_or_path=roberta_model,
output_base_path=f"/content/models/{roberta_model}",
)
###Output
_____no_output_____
###Markdown
**3.1 NER Task** 3.1.1 Train
###Code
task = "ner"
dataset_size = "small"
ner_results = {}
num_layers = list(range(1, 13, 2)) # from 1 to 12 layers
ner_results[roberta_model] = {}
ner_results[roberta_model][task] = {}
for n_layers in num_layers:
f1_macro = probe(roberta_model, task, n_layers, dataset_size) # get f1_macro after probing
ner_results[roberta_model][task][n_layers] = {}
ner_results[roberta_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
roberta_model = "roberta-base"
ner_results[roberta_model] = {
'ner': {
1: {'f1_macro': '0.08040177260694373'},
3: {'f1_macro': '0.0'},
5: {'f1_macro': '0.0'},
7: {'f1_macro': '0.0'},
9: {'f1_macro': '0.0'},
11: {'f1_macro': '0.0'}
}
}
###Output
_____no_output_____
###Markdown
3.1.2 Visualization
###Code
models = [bert_base_model, bert_squad_model, bert_babi_model, roberta_model, bert_adversarial_model]
for model in models:
plot_task(model, task, model_to_linestyle[model], ner_results)
plt.show()
###Output
_____no_output_____
###Markdown
**3.2 SEMEVAL Task** 3.2.1 Train
###Code
task = "semeval"
dataset_size = "small"
num_layers = list(range(1, 13, 2)) # from 1 to 12 layers
semeval_results[roberta_model] = {}
semeval_results[roberta_model][task] = {}
for n_layers in num_layers:
f1_macro = probe(roberta_model, task, n_layers, dataset_size) # get f1_macro after probing
semeval_results[roberta_model][task][n_layers] = {}
semeval_results[roberta_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
roberta_model = "roberta-base"
semeval_results[roberta_model] = {
'semeval': {
1: {'f1_macro': '0.0'},
3: {'f1_macro': '0.0'},
5: {'f1_macro': '0.0'},
7: {'f1_macro': '0.0'},
9: {'f1_macro': '0.0'},
11: {'f1_macro': '0.0'}
}
}
###Output
_____no_output_____
###Markdown
3.2.2 Visualization
###Code
models = [bert_base_model, bert_squad_model, bert_babi_model, bert_adversarial_model, roberta_model]
for model in models:
plot_task(model, task, model_to_linestyle[model], semeval_results)
plt.show()
###Output
_____no_output_____
###Markdown
**3.3 COREF Task** 3.3.1 Train
###Code
task = "coref"
dataset_size = "small"
num_layers = list(range(1, 13, 2)) # from 1 to 12 layers
coref_results[roberta_model] = {}
coref_results[roberta_model][task] = {}
for n_layers in num_layers:
f1_macro = probe(roberta_model, task, n_layers, dataset_size) # get f1_macro after probing
coref_results[roberta_model][task][n_layers] = {}
coref_results[roberta_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
roberta_model = "roberta-base"
coref_results[roberta_model] = {
'coref': {
1: {'f1_macro': '0.5810381355932204'},
3: {'f1_macro': '0.0'},
5: {'f1_macro': '0.44525547445255476'},
7: {'f1_macro': '0.44525547445255476'},
9: {'f1_macro': '0.44525547445255476'},
11: {'f1_macro': '0.3777777777777777'}
}
}
###Output
_____no_output_____
###Markdown
3.3.2 Visualization
###Code
models = [bert_base_model, bert_squad_model, bert_babi_model, roberta_model, bert_adversarial_model]
for model in models:
plot_task(model, task, model_to_linestyle[model], coref_results)
plt.show()
###Output
_____no_output_____
###Markdown
**3.4 QUES Task** 3.4.1 Train
###Code
task = "ques"
dataset_size = "small"
num_layers = list(range(1, 13, 2)) # from 1 to 12 layers
ques_results[roberta_model] = {}
ques_results[roberta_model][task] = {}
for n_layers in num_layers:
f1_macro = probe(roberta_model, task, n_layers, dataset_size) # get f1_macro after probing
ques_results[roberta_model][task][n_layers] = {}
ques_results[roberta_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
roberta_model = "roberta-base"
ques_results[roberta_model] = {
'ques': {
1: {'f1_macro': '0.0'},
3: {'f1_macro': '0.0'},
5: {'f1_macro': '0.0'},
7: {'f1_macro': '0.0'},
9: {'f1_macro': '0.0'},
11: {'f1_macro': '0.0'}
}
}
###Output
_____no_output_____
###Markdown
3.4.2 Visualization
###Code
models = [bert_base_model, bert_squad_model, bert_babi_model, roberta_model, bert_adversarial_model]
for model in models:
plot_task(model, task, model_to_linestyle[model], ques_results)
plt.show()
###Output
_____no_output_____
###Markdown
**3.5 SUP-SQUAD Task** 3.5.1 Train
###Code
task = "sup-squad"
dataset_size = "small"
num_layers = list(range(1, 13, 2)) # from 1 to 12 layers
sup_squad_results[roberta_model] = {}
sup_squad_results[roberta_model][task] = {}
for n_layers in num_layers:
f1_macro = probe(roberta_model, task, n_layers, dataset_size) # get f1_macro after probing
sup_squad_results[roberta_model][task][n_layers] = {}
sup_squad_results[roberta_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
roberta_model = "roberta-base"
sup_squad_results[roberta_model] = {
'sup-squad': {
1: {'f1_macro': '0.5017421602787456'},
3: {'f1_macro': '0.44525547445255476'},
5: {'f1_macro': '0.44525547445255476'},
7: {'f1_macro': '0.44525547445255476'},
9: {'f1_macro': '0.44525547445255476'},
11: {'f1_macro': '0.44525547445255476'}
}
}
###Output
_____no_output_____
###Markdown
3.5.2 Visualization
###Code
models = [bert_base_model, bert_squad_model, roberta_model]
for model in models:
plot_task(model, task, model_to_linestyle[model], sup_squad_results)
plt.show()
###Output
_____no_output_____
###Markdown
**3.6 SUP-BABI Task** 3.6.1 Train
###Code
task = "sup-babi"
dataset_size = "test"
num_layers = list(range(1, 13, 2)) # from 1 to 12 layers
sup_babi_results[roberta_model] = {}
sup_babi_results[roberta_model][task] = {}
for n_layers in num_layers:
f1_macro = probe(roberta_model, task, n_layers, dataset_size) # get f1_macro after probing
sup_babi_results[roberta_model][task][n_layers] = {}
sup_babi_results[roberta_model][task][n_layers]['f1_macro'] = f1_macro # save f1 macro for plotting
roberta_model = "roberta-base"
sup_babi_results[roberta_model] = {
'sup-babi': {
1: {'f1_macro': '0.4666666666666667'},
3: {'f1_macro': '0.4666666666666667'},
5: {'f1_macro': '0.4666666666666667'},
7: {'f1_macro': '0.4666666666666667'},
9: {'f1_macro': '0.4666666666666667'},
11: {'f1_macro': '0.4666666666666667'}
}
}
###Output
_____no_output_____
###Markdown
3.6.2 Visualization
###Code
models = [bert_base_model, bert_babi_model, roberta_model]
for model in models:
plot_task(model, task, model_to_linestyle[model], sup_babi_results)
plt.show()
###Output
_____no_output_____ |
3blue1brown/ODE.ipynb | ###Markdown
[Differential equations, studying the unsolvable](https://www.youtube.com/watch?v=p_di4Zn4wz4)
###Code
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
θ: angle, ω: angular velocity, α: angular acceleration, g: gravitational acceleration, L: pendulum length, μ: air-resistance (damping) coefficient. Relations: g·sinθ = L·α (gravity term), dθ = ω * dt, dω = α * dt, α = -g * np.sin(θ) - (μ * ω)
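In other words (taking the L = 1 used in the code below), the two update rules are an explicit-Euler step of the damped pendulum equation:

$$\ddot{\theta} = -\frac{g}{L}\sin\theta - \mu\,\dot{\theta}$$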
###Code
fig = plt.figure(figsize=(6, 6))
ax = plt.axes([0.1, 0.1, 0.8, 0.8])
ax.axis([-1.2, 1.2, -1.4, 1.2])
ax.grid()
θ = np.pi/2
ω = -5
p = 1 * np.exp(1j * θ)
ax.plot([0, p.imag], [0, -p.real], lw=1.8, c='b', zorder=1)
ax.scatter([0, p.imag], [0, -p.real], s=[30, 800], c=['k', 'r'], zorder=2)
# ax.arrow(p.imag, -p.real, 0, -0.2, head_width=0.05, head_length=0.1, fc='k', ec='k', zorder=3)
ωθ = θ + (np.sign(ω) * np.pi)
ωv = ω * np.exp(1j * θ) / 20
ax.arrow(p.imag, -p.real, ωv.real, ωv.imag, head_width=0.01*abs(ω), head_length=0.02*abs(ω), fc='k', ec='k', zorder=3)
θp = 0.5 * np.exp(1j * θ / 2)
ax.text(θp.imag, -θp.real, '%.1f'%(np.rad2deg(θ)), c='g', size=12, ha='center', va='center')
g = 9.81
L = 1
μ = 0.25
dt = 0.0001
θ_init = -np.pi/2 # initial angle
ω_init = 7.2 # initial angular velocity
θ = θ_init
ω = ω_init
s = 15 # simulation length in seconds
n = int(s / dt)
arr_θ = [θ] # record of the θ trajectory
arr_ω = [ω] # record of the ω trajectory
for i in range(n):
α = -g * np.sin(θ) - (μ * ω)
θ += ω * dt
ω += α * dt
arr_θ.append(θ)
arr_ω.append(ω)
arr_θ = np.array(arr_θ)
arr_ω = np.array(arr_ω)
# pendulum swing illustration
p = np.exp(1j * arr_θ) * np.linspace(0.3, 1.0, n+1)
plt.figure(figsize=(6, 6))
ax = plt.axes([0.5, 0.5, 0.9, 0.9])
ax.plot([0, 2*np.exp(1j * θ_init).imag], [0, -2*np.exp(1j * θ_init).real])
ax.plot(p.imag, -p.real, lw=1.8, zorder=1)
ax.axis([-1.2, 1.2, -1.4, 1.2])
# trajectory in the (θ, ω) phase plane
p = np.exp(1j * arr_θ) * np.linspace(1, 1.5, n+1)
plt.figure(figsize=(10, 5))
ax = plt.axes([0.5, 0.5, 0.9, 0.9])
ax.plot(arr_θ[:], arr_ω[:])
min_θ = -np.pi
max_θ = 5*np.pi
min_ω = -10
# plot range settings
θ = np.linspace(min_θ, max_θ, 500)
ω = np.linspace(min_ω, -min_ω, 200)
x, y = np.meshgrid(θ, ω) # grid
u = y # vector field, θ component
v = -g * np.sin(x) - (μ * y) # vector field, ω component
c = (u**2 + v**2)**0.5 # colour by vector magnitude
# main phase-plane plot
fig = plt.figure(figsize=(15, 7.5))
ax = plt.axes([0.05, 0.05, 0.9, 0.9])
ax.grid()
ax.axis([min_θ, max_θ, min_ω, -min_ω])
plt.xticks(np.pi*np.arange(-1, 6), ['-π', '0', 'π', '2π', '3π', '4π', '5π'])
ax.streamplot(x, y, u, v, density=1.6, color=c, linewidth=1)
ax.arrow(min_θ, 0, 18.65, 0, head_width=0.5, head_length=0.2, fc='k', ec='k', zorder=3)
ax.arrow(0, min_ω, 0, 19.4, head_width=0.2, head_length=0.6, fc='k', ec='k', zorder=3)
# small pendulum inset
ax2 = plt.axes([0.06, 0.08, 0.2, 0.40])
ax2.axis([-1.2, 1.2, -1.4, 1.2])
plt.xticks([])
plt.yticks([])
fig_n = 0
eles = []
for i in range(0, arr_θ.shape[0], 100):
    eles = [] # collect this frame's plot elements in a list
    eles.append(ax.plot(arr_θ[0:i], arr_ω[0:i], c='r', lw=2.5)[0]) # trajectory traced so far
    eles.append(ax.plot(arr_θ[i], arr_ω[i], 'ro')[0]) # current point on the trajectory
    # pendulum in the inset
θ = arr_θ[i]
ω = arr_ω[i]
p = 1 * np.exp(1j * θ)
eles.append(ax2.plot([0, p.imag], [0, -p.real], lw=1.8, c='b', zorder=1)[0])
eles.append(ax2.scatter([0, p.imag], [0, -p.real], s=[30, 500], c=['k', 'r'], zorder=2))
eles.append(ax2.arrow(p.imag, -p.real, 0, -0.2, head_width=0.05, head_length=0.1, fc='k', ec='k', zorder=3))
ωθ = θ + (np.sign(ω) * np.pi)
ωv = ω * np.exp(1j * θ) / 20
eles.append(ax2.arrow(p.imag, -p.real, ωv.real, ωv.imag, head_width=0.01*abs(ω), head_length=0.02*abs(ω), fc='orange', ec='orange', zorder=3))
θp = 0.5 * np.exp(1j * (θ+np.deg2rad(20)))
eles.append(ax2.text(θp.imag, -θp.real, '%.1f'%(np.rad2deg(θ)), c='g', size=12, ha='center', va='center'))
fig.savefig(f'{fig_n}.png')
fig_n += 1
    for ele in eles: # remove this frame's plot elements before redrawing
ele.remove()
del eles
###Output
_____no_output_____ |
04-Text_Input.ipynb | ###Markdown
ipyvuetify Tutorial 04 - Text Input BoxesThis is the fourth in a series of ipyvuetify app development tutorials. If you're just getting started with ipyvuetify and haven't checked out the first tutorial "01 Installation and First Steps.ipynb", be sure to check that one out first.First of all, we'll load the required packages, and test to make sure your environment has all the dependencies set-up successfully:
###Code
from time import sleep
import ipyvuetify as v
import ipywidgets as widgets
import markdown
v.Btn(class_='icon ma-2',
style_='max-width:100px',
color='success',
children=[v.Icon(children=['mdi-check'])])
###Output
_____no_output_____
###Markdown
If you see a green button with a checkmark above, you have successfully installed ipyvuetify and enabled the extension. Good work! If not, refer to the first tutorial and/or the ipyvuetify documentation to set up your system before going further. v.TextField()`v.TextField()` is the primary input type for getting text typed by the user. The bare-bones syntax sets a `label` and an initial value for `v_model` (whatever you want as the default). Then, as usual, you can later set or get the value of the input through the `v_model` attribute.
###Code
test = v.TextField(label='Text Input Example', v_model='Default Input')
test
test.v_model
###Output
_____no_output_____
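###Markdown
Setting works the same way: assigning to `v_model` from Python immediately updates the rendered field (a minimal sketch, assuming the `test` widget from the cell above is still displayed).
###Code
# Sketch: pushing a new value to the displayed widget via its v_model trait
test.v_model = 'Updated from Python'
test.v_model
###Output
_____no_output_____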
###Markdown
Text Area InputTo input more than a single line of text, use the `Textarea` input
###Code
v.Textarea(v_model="""Default Input
Can have line breaks.
And whatever else you want !""")
###Output
_____no_output_____
###Markdown
Number InputSetting the `type` to 'number' results in a `TextField` that only allows numeric input.Test out the input below - you'll find that only the numbers 0-9 as well as '.' and '-' are allowed:
###Code
v.TextField(label='Label Text',
type='number')
###Output
_____no_output_____
###Markdown
Time Input
###Code
v.TextField(v_model='12:30:00',
label='Time Input Example',
type='time')
v.TextField(v_model='12:30:00',
label='Time Input Example (with seconds)',
type='time-with-seconds')
###Output
_____no_output_____
###Markdown
Date Input
###Code
v.TextField(v_model='2020-05-01',
label='Date Input Example',
type='date')
###Output
_____no_output_____
###Markdown
v.TextField StylesThere are many options for styling text input fields. Here are some examples of a few of the key options. Be sure to check out the [vuetify.js text field documentation](https://vuetifyjs.com/en/components/text-fields/) for more detail.
###Code
v.TextField(label='Text Input Example with default style',
v_model='Default Input')
v.TextField(label='Text Input Example with "solo" style', solo=True)
v.TextField(
label='Text Input Example with "single_line" style and "warning" colour',
single_line=True,
color='warning')
v.TextField(placeholder='This text input has no styling at all',
v_model=None,
filled=True)
v.TextField(
placeholder=
'This text input has "dense" styling. It takes up less vertical space.',
v_model=None,
dense=True)
v.TextField(placeholder='This text input has "solo" and "flat" styling',
v_model=None,
solo=True,
flat=True)
v.TextField(
placeholder='This text input has a background fill and rounded shape',
v_model=None,
filled=True,
rounded=True)
v.TextField(
placeholder='This text input has a background fill and fancy shape',
v_model=None,
filled=True,
shaped=True)
###Output
_____no_output_____
###Markdown
Disabled/Readonly`TextField`s can be disabled or made readonly, which might come in handy.
###Code
v.TextField(placeholder='Readonly text field',
v_model="Can't change me !",
readonly=True)
v.TextField(placeholder='Disabled text field',
v_model="Can't change me !",
disabled=True)
###Output
_____no_output_____
###Markdown
In fact, all the input widgets we've seen so far can be set to `disabled` or `readonly`. Adding IconsAs we saw in the last tutorial, you can prepend and/or append icons to this input. You can control the position of the icon relative to the field by using `prepend_inner_icon` instead of `prepend_icon`, or by using `append_outer_icon` instead of `append_icon`. But the default is sensible and will suit most situations nicely.
###Code
v.Row(children=[
v.TextField(class_='pa-4',
v_model = 'Text Field with prepended/appended icons',
label='prepended and appended',
prepend_icon='mdi-recycle',
append_icon='mdi-trash-can'),
v.Spacer()
])
v.Row(children=[
v.TextField(class_='pa-4',
v_model = 'Text Field with prepended/appended icons',
label='prepended_inner and appended_outer',
prepend_inner_icon='mdi-recycle',
append_outer_icon='mdi-trash-can'),
v.Spacer()
])
###Output
_____no_output_____
###Markdown
ClearableThe `clearable` argument gives a nice icon that can be pressed to clear the input of the text field.You can customize this icon, if you want.
###Code
v.TextField(class_='pa-4',
clearable=True,
v_model = 'Clearable Text Field')
v.TextField(class_='pa-4',
clearable=True,
clear_icon='mdi-trash-can',
v_model = 'Clearable Text Field')
###Output
_____no_output_____
###Markdown
Character countThe `counter` argument provides a slick way to let users know how many characters they've entered in the text field. If you set `counter` to an integer, it shows a nice `n / N` readout indicating that the user has typed `n` characters out of a possible `N`. Note that `ipyvuetify` does not do any validation of this limit - the user can type past it and `vuetify` is happy to let them. *Validation has to be done in the Python application.* We will see examples of this in future tutorials.
###Code
v.TextField(counter=100,v_model='Text input with counter')
v.TextField(counter=10,v_model='Text input with the counter limit exceeded')
###Output
_____no_output_____
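###Markdown
As a preview of that Python-side validation (a minimal sketch, not necessarily the pattern the later tutorials use), you could watch `v_model` with an `observe` callback and set the field's `error_messages` when the text grows past the limit. Treat the callback wiring below as an illustration only.
###Code
# Sketch: enforce the 10-character limit in Python by observing v_model changes
limited = v.TextField(counter=10, v_model='')
def check_length(change):
    # Flag the field when the current text exceeds the counter limit
    text = change['new'] or ''
    limited.error_messages = ['Too long!'] if len(text) > 10 else []
limited.observe(check_length, names='v_model')
limited
###Output
_____no_output_____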
###Markdown
Password InputIf you set `type` to 'password', the characters will be hidden
###Code
v.TextField(v_model='MySecretPassword',
type='password',
counter=True)
###Output
_____no_output_____
###Markdown
Prefixes and Suffixes(Borrowed verbatim from [the vuetify.js documentation](https://vuetifyjs.com/en/components/text-fields/prefixes-suffixes))The prefix and suffix properties allow you to prepend and append inline non-modifiable text next to the text field. Prefix
###Code
v.TextField(v_model='10.00',
label='Dollars',
prefix='$')
###Output
_____no_output_____
###Markdown
Suffix for weight
###Code
v.TextField(v_model='3',
label='Xanthan Gum',
suffix='Tbsp')
###Output
_____no_output_____
###Markdown
Suffix for domain
###Code
v.TextField(v_model='radinplaid',
label='Username',
readonly=True,
suffix='@github.com')
###Output
_____no_output_____
###Markdown
Time Zone Input
###Code
v.TextField(v_model='12:30:00',
label='Label Text',
type='time',
suffix='EST')
###Output
_____no_output_____ |
Experiments/Crawling/Jupyter Notebooks/Maria-Iuliana Bocicor.ipynb | ###Markdown
Manual publication DB insertion from raw text using syntax features Publications and conferences of Dr. BOCICOR Maria Iuliana, Profesor Universitar http://www.cs.ubbcluj.ro/~iuliana
###Code
class HelperMethods:
@staticmethod
def IsDate(text):
# print("text")
# print(text)
for c in text.lstrip():
if c not in "1234567890 ":
return False
return True
import pandas
import requests
page = requests.get('https://sites.google.com/view/iuliana-bocicor/research/publications')
data = page.text
from bs4 import BeautifulSoup
soup = BeautifulSoup(data)
def GetPublicationData_Number(text):
title = text.split(',')[0].split('.')[1]
try:
date = [k.lstrip() for k in text.split(',') if HelperMethods.IsDate(k.lstrip())][0]
except:
date = ""
return title, "", date
import re
def GetCoAuthorData(text):
# print(text)
val = re.search('\"[a-zA-Z ]+\"', text)
title = val.group(0)
val = re.search('Authors: [a-zA-Z,-. ]+ (?=Pages)', text)
authors = val.group(0)
# print(authors)
return title, authors, ""
def GetPublicationData_A(text):
print(text)
print()
text = text.replace("M. ", "")
authors = text.split('.')[0]
print("authors: ", authors)
title = text.split('.')[1].lstrip(' \"')
print("title: ", title)
try:
val = re.search('(19|20)[0-9]{2}\.', text)
date = val.group(0).rstrip('.')
except:
date = ""
print()
return title, authors, date
pubs = []
# print(soup.find_all('div'))
for e in soup.find_all('div'):
if "class" in e.attrs:
if e.attrs["class"] == ["tyJCtd", "mGzaTb", "baZpAe"]:
# for every pub entry
for c in e.find_all("p", attrs={"class": "zfr3Q"}):
if c.text == "":
continue
if "co-author" in c.text:
rval = GetCoAuthorData(c.text)
else:
features = c.text.split('.')
if features[0].isdecimal():
rval = GetPublicationData_Number(c.text)
else:
rval = GetPublicationData_A(c.text)
pubs.append(rval)
for pub in pubs:
print(pub)
print("Count: ", len(pubs))
###Output
Count: 46
###Markdown
DB Storage (TODO)Time to store the entries in the `papers` DB table. ![Screenshot](Images/PapersTableSpec.PNG)
###Code
import mariadb
import json
with open('../credentials.json', 'r') as crd_json_fd:
json_text = crd_json_fd.read()
json_obj = json.loads(json_text)
credentials = json_obj["Credentials"]
username = credentials["username"]
password = credentials["password"]
table_name = "publications_cache"
db_name = "ubbcluj"
print(table_name)
mariadb_connection = mariadb.connect(user=username, password=password, database=db_name)
mariadb_cursor = mariadb_connection.cursor()
for paper in pubs:
title = ""
pub_date = ""
authors = ""
try:
pub_date = paper[2].lstrip()
pub_date = str(pub_date) + "-01-01"
if len(pub_date) != 10:
pub_date = ""
except:
pass
try:
title = paper[0].lstrip()
except:
pass
try:
authors = paper[1].lstrip()
except AttributeError:
pass
insert_string = "INSERT INTO {0} SET ".format(table_name)
insert_string += "Title=\'{0}\', ".format(title)
insert_string += "ProfessorId=\'{0}\', ".format(7)
if pub_date != "":
insert_string += "PublicationDate=\'{0}\', ".format(str(pub_date))
insert_string += "Authors=\'{0}\', ".format(authors)
insert_string += "Affiliations=\'{0}\' ".format("")
print(insert_string)
try:
mariadb_cursor.execute(insert_string)
except mariadb.ProgrammingError as pe:
print("Error")
raise pe
except mariadb.IntegrityError:
continue
mariadb_connection.close()
###Output
_____no_output_____ |
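###Markdown
One caveat with the string-built `INSERT` above: a title or author list containing a single quote will break the statement. A safer pattern (shown only as a sketch; the `?` placeholder style of MariaDB Connector/Python is assumed, and the connection above has already been closed) is to bind parameters instead of formatting them into the SQL string.
###Code
# Sketch: parameterized insert -- the connector handles quoting/escaping of the values
insert_template = (
    "INSERT INTO {0} (Title, ProfessorId, PublicationDate, Authors, Affiliations) "
    "VALUES (?, ?, ?, ?, ?)"
).format(table_name)
# mariadb_cursor.execute(insert_template, (title, 7, pub_date or None, authors, ""))
###Output
_____no_output_____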
Python/AbsoluteAndOtherAlgorithms/7GLIOMA/NDFS_64.ipynb | ###Markdown
1. Import libraries
###Code
#----------------------------Reproducible----------------------------------------------------------------------------------------
import numpy as np
import random as rn
import os
seed=0
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
rn.seed(seed)
#----------------------------Reproducible----------------------------------------------------------------------------------------
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
#--------------------------------------------------------------------------------------------------------------------------------
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
%matplotlib inline
matplotlib.style.use('ggplot')
import random
import scipy.sparse as sparse
import scipy.io
from keras.utils import to_categorical
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import scipy.io
from skfeature.function.sparse_learning_based import NDFS
from skfeature.utility import construct_W
from skfeature.utility.sparse_learning import feature_ranking
import time
import pandas as pd
#--------------------------------------------------------------------------------------------------------------------------------
def ETree(p_train_feature,p_train_label,p_test_feature,p_test_label,p_seed):
clf = ExtraTreesClassifier(n_estimators=50, random_state=p_seed)
# Training
clf.fit(p_train_feature, p_train_label)
# Training accuracy
print('Training accuracy:',clf.score(p_train_feature, np.array(p_train_label)))
print('Training accuracy:',accuracy_score(np.array(p_train_label),clf.predict(p_train_feature)))
#print('Training accuracy:',np.sum(clf.predict(p_train_feature)==np.array(p_train_label))/p_train_label.shape[0])
# Testing accuracy
print('Testing accuracy:',clf.score(p_test_feature, np.array(p_test_label)))
print('Testing accuracy:',accuracy_score(np.array(p_test_label),clf.predict(p_test_feature)))
#print('Testing accuracy:',np.sum(clf.predict(p_test_feature)==np.array(p_test_label))/p_test_label.shape[0])
#--------------------------------------------------------------------------------------------------------------------------------
def write_to_csv(p_data,p_path):
dataframe = pd.DataFrame(p_data)
dataframe.to_csv(p_path, mode='a',header=False,index=False,sep=',')
###Output
_____no_output_____
###Markdown
2. Loading data
###Code
data_path="./Dataset/GLIOMA.mat"
Data = scipy.io.loadmat(data_path)
data_arr=Data['X']
label_arr=Data['Y'][:, 0]-1
Data=MinMaxScaler(feature_range=(0,1)).fit_transform(data_arr)
C_train_x,C_test_x,C_train_y,C_test_y= train_test_split(Data,label_arr,test_size=0.2,random_state=seed)
print('Shape of C_train_x: ' + str(C_train_x.shape))
print('Shape of C_train_y: ' + str(C_train_y.shape))
print('Shape of C_test_x: ' + str(C_test_x.shape))
print('Shape of C_test_y: ' + str(C_test_y.shape))
key_feture_number=64
###Output
_____no_output_____
###Markdown
3. Classifying 1 Extra Trees
###Code
train_feature=C_train_x
train_label=C_train_y
test_feature=C_test_x
test_label=C_test_y
print('Shape of train_feature: ' + str(train_feature.shape))
print('Shape of train_label: ' + str(train_label.shape))
print('Shape of test_feature: ' + str(test_feature.shape))
print('Shape of test_label: ' + str(test_label.shape))
p_seed=seed
ETree(train_feature,train_label,test_feature,test_label,p_seed)
num_cluster=len(np.unique(label_arr))
###Output
_____no_output_____
###Markdown
4. Model
###Code
start = time.clock()
# construct affinity matrix
kwargs_W = {"metric": "euclidean", "neighborMode": "knn", "weightMode": "heatKernel", "k": 5, 't': 1}
train_W = construct_W.construct_W(train_feature, **kwargs_W)
# obtain the scores of features, and sort the feature scores in an ascending order according to the feature scores
train_score = NDFS.ndfs(train_feature, W=train_W,n_clusters=num_cluster)
train_idx = feature_ranking(train_score)
# obtain the dataset on the selected features
train_selected_x = train_feature[:, train_idx[0:key_feture_number]]
print("train_selected_x",train_selected_x.shape)
test_W = construct_W.construct_W(test_feature, **kwargs_W)
# obtain the scores of features, and sort the feature scores in an ascending order according to the feature scores
test_score = NDFS.ndfs(test_feature, W=test_W,n_clusters=num_cluster)
test_idx = feature_ranking(test_score)
# obtain the dataset on the selected features
test_selected_x = test_feature[:, test_idx[0:key_feture_number]]
print("test_selected_x",test_selected_x.shape)
time_cost=time.clock() - start
write_to_csv(np.array([time_cost]),"./log/NDFS_time"+str(key_feture_number)+".csv")
###Output
/usr/local/lib/python3.7/site-packages/ipykernel_launcher.py:1: DeprecationWarning: time.clock has been deprecated in Python 3.3 and will be removed from Python 3.8: use time.perf_counter or time.process_time instead
"""Entry point for launching an IPython kernel.
/usr/local/lib/python3.7/site-packages/sklearn/cluster/_kmeans.py:934: FutureWarning: 'precompute_distances' was deprecated in version 0.23 and will be removed in 0.25. It has no effect
"effect", FutureWarning)
/usr/local/lib/python3.7/site-packages/sklearn/cluster/_kmeans.py:939: FutureWarning: 'n_jobs' was deprecated in version 0.23 and will be removed in 0.25.
" removed in 0.25.", FutureWarning)
###Markdown
5. Classifying 2 Extra Trees
###Code
train_feature=train_selected_x
train_label=C_train_y
test_feature=test_selected_x
test_label=C_test_y
print('Shape of train_feature: ' + str(train_feature.shape))
print('Shape of train_label: ' + str(train_label.shape))
print('Shape of test_feature: ' + str(test_feature.shape))
print('Shape of test_label: ' + str(test_label.shape))
p_seed=seed
ETree(train_feature,train_label,test_feature,test_label,p_seed)
###Output
Shape of train_feature: (40, 64)
Shape of train_label: (40,)
Shape of test_feature: (10, 64)
Shape of test_label: (10,)
Training accuracy: 1.0
Training accuracy: 1.0
Testing accuracy: 0.4
Testing accuracy: 0.4
###Markdown
6. Reconstruction loss
###Code
from sklearn.linear_model import LinearRegression
def mse_check(train, test):
LR = LinearRegression(n_jobs = -1)
LR.fit(train[0], train[1])
MSELR = ((LR.predict(test[0]) - test[1]) ** 2).mean()
return MSELR
train_feature_tuple=(train_selected_x,C_train_x)
test_feature_tuple=(test_selected_x,C_test_x)
reconstruction_loss=mse_check(train_feature_tuple, test_feature_tuple)
print(reconstruction_loss)
###Output
0.40368877967904
|
notebooks/EDA/EDA_aida-conll-yago.ipynb | ###Markdown
aida-conll-yago-dataset Data DescriptionA dataset for named entity recognition and disambiguation (NERD), > File Format> ----------->> The format of the final file is the following:>> - Each document starts with a line: -DOCSTART- ()> - Each following line represents a single token, sentences are separated by an empty line> > Lines with tabs are tokens the are part of a mention:> - column 1 is the token> - column 2 is either B (beginning of a mention) or I (continuation of a mention)> - column 3 is the full mention used to find entity candidates> - column 4 is the corresponding YAGO2 entity (in YAGO encoding, i.e. unicode characters are backslash encoded and spaces are replaced by underscores, see also the tools on the YAGO2 website), OR --NME--, denoting that there is no matching entity in YAGO2 for this particular mention, or that we are missing the connection between the mention string and the YAGO2 entity.> - column 5 is the corresponding Wikipedia URL of the entity (added for convenience when evaluating against a Wikipedia based method)> - column 6 is the corresponding Wikipedia ID of the entity (added for convenience when evaluating against a Wikipedia based method - the ID refers to the dump used for annotation, 2010-08-17)> - column 7 is the corresponding Freebase mid, if there is one (thanks to Massimiliano Ciaramita from Google Zürich for creating the mapping and making it available to us)
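A quick sketch of how one such mention line splits into the seven columns (the line below is made up for illustration; the real data is read from the TSV in the next cell):
###Code
# Hypothetical mention line, tab-separated into the seven columns described above
example_line = "Germany\tB\tGermany\tGermany\thttp://en.wikipedia.org/wiki/Germany\t11867\t/m/0345h"
token, mention, full_mention, yago2, wiki_url, wiki_id, freebase_mid = example_line.split("\t")
token, yago2, wiki_id, freebase_mid
###Output
_____no_output_____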
###Code
import csv
# df_acy = dd.read_csv('../../aida-conll-yago-dataset/AIDA-YAGO2-DATASET.tsv', sep='\t',dtype='object').compute()
# res = df.infer_objects()
tsv_file = open('../../data/aida-conll-yago-dataset/AIDA-YAGO2-DATASET.tsv')
read_tsv = csv.reader(tsv_file, delimiter="\t")
df = []
for row in read_tsv:
df.append(row)
len(df[1])
###Output
_____no_output_____
###Markdown
**Note:** `wikipedia_ID` in ACY corresponds to `page_id` in KWNLP.
###Code
acy_df = pd.DataFrame(data = df[1:])
new = ['token', 'mention', 'full_mention', 'YAGO2', 'wikipedia_URL', 'wikipedia_ID', 'freebase']
acy_df = acy_df.rename(columns = dict(zip(range(7), new)))
acy_df.head(50)
# Display dataframe with only full_mention values != None
acy_df[acy_df['full_mention'].notna()]
# Display dataframe with only full_mention values != None
acy_df[acy_df['full_mention'].isna()]
len(acy_df)
print('{:.2f}% of them had a full mention matched'.format(sum([i!=None for i in acy_df.iloc[:, 2]])/len(acy_df)*100))
print('{:.2f}% of them had a yago2 entity matched'.format(sum([(i!=None and i!='--NME--') for i in acy_df.iloc[:, 3]])/len(acy_df)*100))
print('{:.2f}% of them had a wikipedia page matched'.format(sum([i!=None for i in acy_df.iloc[:, 4]])/len(acy_df)*100))
print('{:.2f}% of them had a freebase mid matched'.format(sum([i!=None for i in acy_df.iloc[:, 6]])/len(acy_df)*100))
###Output
12.60% of them had a freebase mid matched
|
experiments/tw-on-uniform/2/tw_template.ipynb | ###Markdown
Imports
###Code
#packages
import numpy
import tensorflow as tf
from tensorflow.core.example import example_pb2
#utils
import os
import random
import pickle
import struct
import time
from generators import *
#keras
import keras
from keras.preprocessing import text, sequence
from keras.preprocessing.text import Tokenizer
from keras.models import Model, Sequential
from keras.models import load_model
from keras.layers import Dense, Dropout, Activation, Concatenate, Dot, Embedding, LSTM, Conv1D, MaxPooling1D, Input, Lambda
#callbacks
from keras.callbacks import TensorBoard, ModelCheckpoint, Callback
###Output
Using TensorFlow backend.
###Markdown
Seeding
###Code
sd = 2
from numpy.random import seed
seed(sd)
from tensorflow import set_random_seed
set_random_seed(sd)
###Output
_____no_output_____
###Markdown
CPU usage
###Code
#os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152
#os.environ["CUDA_VISIBLE_DEVICES"] = ""
###Output
_____no_output_____
###Markdown
Global parameters
###Code
# Embedding
max_features = 400000
maxlen_text = 400
maxlen_summ = 80
embedding_size = 100 #128
# Convolution
kernel_size = 5
filters = 64
pool_size = 4
# LSTM
lstm_output_size = 70
# Training
batch_size = 32
epochs = 3
###Output
_____no_output_____
###Markdown
Load data
###Code
data_dir = '/mnt/disks/500gb/experimental-data-mini/experimental-data-mini/pseudorandom-dist-1to1/1to1/'
processing_dir = '/mnt/disks/500gb/stats-and-meta-data/400000/'
with open(data_dir+'partition.pickle', 'rb') as handle: partition = pickle.load(handle)
with open(data_dir+'labels.pickle', 'rb') as handle: labels = pickle.load(handle)
with open(processing_dir+'tokenizer.pickle', 'rb') as handle: tokenizer = pickle.load(handle)
embedding_matrix = numpy.load(processing_dir+'embedding_matrix.npy')
#the p_n constant
c = 80000
#stats
maxi = numpy.load(processing_dir+'training-stats-all/maxi.npy')
mini = numpy.load(processing_dir+'training-stats-all/mini.npy')
sample_info = (numpy.random.uniform, mini,maxi)
###Output
_____no_output_____
###Markdown
Model
###Code
#2way input
text_input = Input(shape=(maxlen_text,embedding_size), dtype='float32')
summ_input = Input(shape=(maxlen_summ,embedding_size), dtype='float32')
#2way dropout
text_route = Dropout(0.25)(text_input)
summ_route = Dropout(0.25)(summ_input)
#2way conv
text_route = Conv1D(filters,
kernel_size,
padding='valid',
activation='relu',
strides=1)(text_route)
summ_route = Conv1D(filters,
kernel_size,
padding='valid',
activation='relu',
strides=1)(summ_route)
#2way max pool
text_route = MaxPooling1D(pool_size=pool_size)(text_route)
summ_route = MaxPooling1D(pool_size=pool_size)(summ_route)
#2way lstm
text_route = LSTM(lstm_output_size)(text_route)
summ_route = LSTM(lstm_output_size)(summ_route)
#get dot of both routes
merged = Dot(axes=1,normalize=True)([text_route, summ_route])
#negate results
#merged = Lambda(lambda x: -1*x)(merged)
#add p_n constant
#merged = Lambda(lambda x: x + c)(merged)
#output
output = Dense(1, activation='sigmoid')(merged)
#define model
model = Model(inputs=[text_input, summ_input], outputs=[output])
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train model
###Code
#callbacks
class BatchHistory(keras.callbacks.Callback):
def on_train_begin(self, logs={}):
self.losses = []
self.accs = []
def on_batch_end(self, batch, logs={}):
self.losses.append(logs.get('loss'))
self.accs.append(logs.get('acc'))
history = BatchHistory()
tensorboard = TensorBoard(log_dir='./logs', histogram_freq=0, batch_size=batch_size, write_graph=True, write_grads=True)
modelcheckpoint = ModelCheckpoint('best.h5', monitor='val_loss', verbose=0, save_best_only=True, mode='min', period=1)
#batch generator parameters
params = {'dim': [(maxlen_text,embedding_size),(maxlen_summ,embedding_size)],
'batch_size': batch_size,
'shuffle': True,
'tokenizer':tokenizer,
'embedding_matrix':embedding_matrix,
'maxlen_text':maxlen_text,
'maxlen_summ':maxlen_summ,
'data_dir':data_dir,
'sample_info':sample_info}
#generators
training_generator = TwoQuartGenerator(partition['train'], labels, **params)
validation_generator = TwoQuartGenerator(partition['validation'], labels, **params)
# Train model on dataset
model.fit_generator(generator=training_generator,
validation_data=validation_generator,
use_multiprocessing=True,
workers=5,
epochs=epochs,
callbacks=[tensorboard, modelcheckpoint, history])
with open('losses.pickle', 'wb') as handle: pickle.dump(history.losses, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('accs.pickle', 'wb') as handle: pickle.dump(history.accs, handle, protocol=pickle.HIGHEST_PROTOCOL)
###Output
_____no_output_____ |
docs/tutorials/benchmarking.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. BenchmarkingThis tutorial benchmarks the performance of various sampling strategies, with and without caching.It's recommended to run this notebook on Google Colab if you don't have your own GPU. Click the "Open in Colab" button above to get started. SetupFirst, we install TorchGeo.TODO: this should be updated to use `pip install torchgeo` once we release on PyPI.
###Code
import os
import sys
sys.path.append(os.path.join("..", ".."))
###Output
_____no_output_____
###Markdown
ImportsNext, we import TorchGeo and any other libraries we need.
###Code
import tempfile
import time
from typing import Tuple
from torch.utils.data import DataLoader
from torchgeo.datasets import NAIP, ChesapeakeDE
from torchgeo.datasets.utils import download_url
from torchgeo.models import FCN
from torchgeo.samplers import RandomGeoSampler, GridGeoSampler, RandomBatchGeoSampler
###Output
_____no_output_____
###Markdown
DatasetsFor this tutorial, we'll be using imagery from the [National Agriculture Imagery Program (NAIP)](https://www.fsa.usda.gov/programs-and-services/aerial-photography/imagery-programs/naip-imagery/) and labels from the [Chesapeake Bay High-Resolution Land Cover Project](https://www.chesapeakeconservancy.org/conservation-innovation-center/high-resolution-data/land-cover-data-project/). First, we manually download a few NAIP tiles.
###Code
data_root = tempfile.gettempdir()
naip_root = os.path.join(data_root, "naip")
naip_url = "https://naipblobs.blob.core.windows.net/naip/v002/de/2018/de_060cm_2018/38075/"
tiles = [
"m_3807511_ne_18_060_20181104.tif",
"m_3807511_se_18_060_20181104.tif",
"m_3807512_nw_18_060_20180815.tif",
"m_3807512_sw_18_060_20180815.tif",
]
for tile in tiles:
download_url(naip_url + tile, naip_root)
###Output
_____no_output_____
###Markdown
Next, we tell TorchGeo to automatically download the corresponding Chesapeake labels.
###Code
chesapeake_root = os.path.join(data_root, "chesapeake")
chesapeake = ChesapeakeDE(chesapeake_root, download=True)
###Output
_____no_output_____
###Markdown
Timing function
###Code
def time_epoch(dataloader: DataLoader) -> Tuple[float, int]:
tic = time.time()
i = 0
for _ in dataloader:
i += 1
toc = time.time()
return toc - tic, i
###Output
_____no_output_____
###Markdown
RandomGeoSampler
###Code
for cache in [False, True]:
chesapeake = ChesapeakeDE(chesapeake_root, cache=cache)
naip = NAIP(naip_root, crs=chesapeake.crs, res=chesapeake.res, cache=cache)
dataset = chesapeake + naip
sampler = RandomGeoSampler(naip.index, size=1000, length=888)
dataloader = DataLoader(dataset, batch_size=12, sampler=sampler)
duration, count = time_epoch(dataloader)
print(duration, count)
###Output
296.582683801651 74
54.20210099220276 74
###Markdown
GridGeoSampler
###Code
for cache in [False, True]:
chesapeake = ChesapeakeDE(chesapeake_root, cache=cache)
naip = NAIP(naip_root, crs=chesapeake.crs, res=chesapeake.res, cache=cache)
dataset = chesapeake + naip
sampler = GridGeoSampler(naip.index, size=1000, stride=500)
dataloader = DataLoader(dataset, batch_size=12, sampler=sampler)
duration, count = time_epoch(dataloader)
print(duration, count)
###Output
391.90197944641113 74
118.0611424446106 74
###Markdown
RandomBatchGeoSampler
###Code
for cache in [False, True]:
chesapeake = ChesapeakeDE(chesapeake_root, cache=cache)
naip = NAIP(naip_root, crs=chesapeake.crs, res=chesapeake.res, cache=cache)
dataset = chesapeake + naip
sampler = RandomBatchGeoSampler(naip.index, size=1000, batch_size=12, length=888)
dataloader = DataLoader(dataset, batch_sampler=sampler)
duration, count = time_epoch(dataloader)
print(duration, count)
###Output
230.51380324363708 74
53.99923872947693 74
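###Markdown
The absolute timings above are hardware-dependent; the relative effect of caching is what matters. The cell below is a minimal sketch that simply recomputes the speedup factors from the numbers printed in this run (they are hard-coded here rather than captured programmatically).
###Code
timings = {
    "RandomGeoSampler": (296.582683801651, 54.20210099220276),
    "GridGeoSampler": (391.90197944641113, 118.0611424446106),
    "RandomBatchGeoSampler": (230.51380324363708, 53.99923872947693),
}
for name, (no_cache, cache) in timings.items():
    print(f"{name}: {no_cache / cache:.1f}x faster with caching")
###Output
_____no_output_____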
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. BenchmarkingThis tutorial benchmarks the performance of various sampling strategies, with and without caching.It's recommended to run this notebook on Google Colab if you don't have your own GPU. Click the "Open in Colab" button above to get started. SetupFirst, we install TorchGeo.
###Code
%pip install torchgeo
###Output
_____no_output_____
###Markdown
ImportsNext, we import TorchGeo and any other libraries we need.
###Code
import os
import tempfile
import time
from typing import Tuple
from torch.utils.data import DataLoader
from torchgeo.datasets import NAIP, ChesapeakeDE
from torchgeo.datasets.utils import download_url
from torchgeo.samplers import RandomGeoSampler, GridGeoSampler, RandomBatchGeoSampler
###Output
_____no_output_____
###Markdown
DatasetsFor this tutorial, we'll be using imagery from the [National Agriculture Imagery Program (NAIP)](https://www.fsa.usda.gov/programs-and-services/aerial-photography/imagery-programs/naip-imagery/) and labels from the [Chesapeake Bay High-Resolution Land Cover Project](https://www.chesapeakeconservancy.org/conservation-innovation-center/high-resolution-data/land-cover-data-project/). First, we manually download a few NAIP tiles.
###Code
data_root = tempfile.gettempdir()
naip_root = os.path.join(data_root, "naip")
naip_url = "https://naipblobs.blob.core.windows.net/naip/v002/de/2018/de_060cm_2018/38075/"
tiles = [
"m_3807511_ne_18_060_20181104.tif",
"m_3807511_se_18_060_20181104.tif",
"m_3807512_nw_18_060_20180815.tif",
"m_3807512_sw_18_060_20180815.tif",
]
for tile in tiles:
download_url(naip_url + tile, naip_root)
###Output
_____no_output_____
###Markdown
Next, we tell TorchGeo to automatically download the corresponding Chesapeake labels.
###Code
chesapeake_root = os.path.join(data_root, "chesapeake")
chesapeake = ChesapeakeDE(chesapeake_root, download=True)
###Output
_____no_output_____
###Markdown
Timing function
###Code
def time_epoch(dataloader: DataLoader) -> Tuple[float, int]:
tic = time.time()
i = 0
for _ in dataloader:
i += 1
toc = time.time()
return toc - tic, i
###Output
_____no_output_____
###Markdown
RandomGeoSampler
###Code
for cache in [False, True]:
chesapeake = ChesapeakeDE(chesapeake_root, cache=cache)
naip = NAIP(naip_root, crs=chesapeake.crs, res=chesapeake.res, cache=cache)
dataset = chesapeake + naip
sampler = RandomGeoSampler(naip, size=1000, length=888)
dataloader = DataLoader(dataset, batch_size=12, sampler=sampler)
duration, count = time_epoch(dataloader)
print(duration, count)
###Output
296.582683801651 74
54.20210099220276 74
###Markdown
GridGeoSampler
###Code
for cache in [False, True]:
chesapeake = ChesapeakeDE(chesapeake_root, cache=cache)
naip = NAIP(naip_root, crs=chesapeake.crs, res=chesapeake.res, cache=cache)
dataset = chesapeake + naip
sampler = GridGeoSampler(naip, size=1000, stride=500)
dataloader = DataLoader(dataset, batch_size=12, sampler=sampler)
duration, count = time_epoch(dataloader)
print(duration, count)
###Output
391.90197944641113 74
118.0611424446106 74
###Markdown
RandomBatchGeoSampler
###Code
for cache in [False, True]:
chesapeake = ChesapeakeDE(chesapeake_root, cache=cache)
naip = NAIP(naip_root, crs=chesapeake.crs, res=chesapeake.res, cache=cache)
dataset = chesapeake + naip
sampler = RandomBatchGeoSampler(naip, size=1000, batch_size=12, length=888)
dataloader = DataLoader(dataset, batch_sampler=sampler)
duration, count = time_epoch(dataloader)
print(duration, count)
###Output
230.51380324363708 74
53.99923872947693 74
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. BenchmarkingThis tutorial benchmarks the performance of various sampling strategies, with and without caching.It's recommended to run this notebook on Google Colab if you don't have your own GPU. Click the "Open in Colab" button above to get started. SetupFirst, we install TorchGeo.
###Code
%pip install torchgeo
###Output
_____no_output_____
###Markdown
ImportsNext, we import TorchGeo and any other libraries we need.
###Code
import os
import tempfile
import time
from typing import Tuple
from torch.utils.data import DataLoader
from torchgeo.datasets import NAIP, ChesapeakeDE
from torchgeo.datasets.utils import download_url, stack_samples
from torchgeo.samplers import RandomGeoSampler, GridGeoSampler, RandomBatchGeoSampler
###Output
_____no_output_____
###Markdown
DatasetsFor this tutorial, we'll be using imagery from the [National Agriculture Imagery Program (NAIP)](https://www.fsa.usda.gov/programs-and-services/aerial-photography/imagery-programs/naip-imagery/) and labels from the [Chesapeake Bay High-Resolution Land Cover Project](https://www.chesapeakeconservancy.org/conservation-innovation-center/high-resolution-data/land-cover-data-project/). First, we manually download a few NAIP tiles.
###Code
data_root = tempfile.gettempdir()
naip_root = os.path.join(data_root, "naip")
naip_url = "https://naipeuwest.blob.core.windows.net/naip/v002/de/2018/de_060cm_2018/38075/"
tiles = [
"m_3807511_ne_18_060_20181104.tif",
"m_3807511_se_18_060_20181104.tif",
"m_3807512_nw_18_060_20180815.tif",
"m_3807512_sw_18_060_20180815.tif",
]
for tile in tiles:
download_url(naip_url + tile, naip_root)
###Output
_____no_output_____
###Markdown
Next, we tell TorchGeo to automatically download the corresponding Chesapeake labels.
###Code
chesapeake_root = os.path.join(data_root, "chesapeake")
chesapeake = ChesapeakeDE(chesapeake_root, download=True)
###Output
_____no_output_____
###Markdown
Timing function
###Code
def time_epoch(dataloader: DataLoader) -> Tuple[float, int]:
tic = time.time()
i = 0
for _ in dataloader:
i += 1
toc = time.time()
return toc - tic, i
###Output
_____no_output_____
###Markdown
RandomGeoSampler
###Code
for cache in [False, True]:
chesapeake = ChesapeakeDE(chesapeake_root, cache=cache)
naip = NAIP(naip_root, crs=chesapeake.crs, res=chesapeake.res, cache=cache)
dataset = chesapeake & naip
sampler = RandomGeoSampler(naip, size=1000, length=888)
dataloader = DataLoader(dataset, batch_size=12, sampler=sampler, collate_fn=stack_samples)
duration, count = time_epoch(dataloader)
print(duration, count)
###Output
296.582683801651 74
54.20210099220276 74
###Markdown
GridGeoSampler
###Code
for cache in [False, True]:
chesapeake = ChesapeakeDE(chesapeake_root, cache=cache)
naip = NAIP(naip_root, crs=chesapeake.crs, res=chesapeake.res, cache=cache)
dataset = chesapeake & naip
sampler = GridGeoSampler(naip, size=1000, stride=500)
dataloader = DataLoader(dataset, batch_size=12, sampler=sampler, collate_fn=stack_samples)
duration, count = time_epoch(dataloader)
print(duration, count)
###Output
391.90197944641113 74
118.0611424446106 74
###Markdown
RandomBatchGeoSampler
###Code
for cache in [False, True]:
chesapeake = ChesapeakeDE(chesapeake_root, cache=cache)
naip = NAIP(naip_root, crs=chesapeake.crs, res=chesapeake.res, cache=cache)
dataset = chesapeake & naip
sampler = RandomBatchGeoSampler(naip, size=1000, batch_size=12, length=888)
dataloader = DataLoader(dataset, batch_sampler=sampler, collate_fn=stack_samples)
duration, count = time_epoch(dataloader)
print(duration, count)
###Output
230.51380324363708 74
53.99923872947693 74
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. BenchmarkingThis tutorial benchmarks the performance of various sampling strategies, with and without caching.It's recommended to run this notebook on Google Colab if you don't have your own GPU. Click the "Open in Colab" button above to get started. SetupFirst, we install TorchGeo.
###Code
%pip install torchgeo
###Output
_____no_output_____
###Markdown
ImportsNext, we import TorchGeo and any other libraries we need.
###Code
import os
import tempfile
import time
from typing import Tuple
from torch.utils.data import DataLoader
from torchgeo.datasets import NAIP, ChesapeakeDE
from torchgeo.datasets.utils import download_url
from torchgeo.models import FCN
from torchgeo.samplers import RandomGeoSampler, GridGeoSampler, RandomBatchGeoSampler
###Output
_____no_output_____
###Markdown
DatasetsFor this tutorial, we'll be using imagery from the [National Agriculture Imagery Program (NAIP)](https://www.fsa.usda.gov/programs-and-services/aerial-photography/imagery-programs/naip-imagery/) and labels from the [Chesapeake Bay High-Resolution Land Cover Project](https://www.chesapeakeconservancy.org/conservation-innovation-center/high-resolution-data/land-cover-data-project/). First, we manually download a few NAIP tiles.
###Code
data_root = tempfile.gettempdir()
naip_root = os.path.join(data_root, "naip")
naip_url = "https://naipblobs.blob.core.windows.net/naip/v002/de/2018/de_060cm_2018/38075/"
tiles = [
"m_3807511_ne_18_060_20181104.tif",
"m_3807511_se_18_060_20181104.tif",
"m_3807512_nw_18_060_20180815.tif",
"m_3807512_sw_18_060_20180815.tif",
]
for tile in tiles:
download_url(naip_url + tile, naip_root)
###Output
_____no_output_____
###Markdown
Next, we tell TorchGeo to automatically download the corresponding Chesapeake labels.
###Code
chesapeake_root = os.path.join(data_root, "chesapeake")
chesapeake = ChesapeakeDE(chesapeake_root, download=True)
###Output
_____no_output_____
###Markdown
Timing function
###Code
def time_epoch(dataloader: DataLoader) -> Tuple[float, int]:
tic = time.time()
i = 0
for _ in dataloader:
i += 1
toc = time.time()
return toc - tic, i
###Output
_____no_output_____
###Markdown
RandomGeoSampler
###Code
for cache in [False, True]:
chesapeake = ChesapeakeDE(chesapeake_root, cache=cache)
naip = NAIP(naip_root, crs=chesapeake.crs, res=chesapeake.res, cache=cache)
dataset = chesapeake + naip
sampler = RandomGeoSampler(naip, size=1000, length=888)
dataloader = DataLoader(dataset, batch_size=12, sampler=sampler)
duration, count = time_epoch(dataloader)
print(duration, count)
###Output
296.582683801651 74
54.20210099220276 74
###Markdown
GridGeoSampler
###Code
for cache in [False, True]:
chesapeake = ChesapeakeDE(chesapeake_root, cache=cache)
naip = NAIP(naip_root, crs=chesapeake.crs, res=chesapeake.res, cache=cache)
dataset = chesapeake + naip
sampler = GridGeoSampler(naip, size=1000, stride=500)
dataloader = DataLoader(dataset, batch_size=12, sampler=sampler)
duration, count = time_epoch(dataloader)
print(duration, count)
###Output
391.90197944641113 74
118.0611424446106 74
###Markdown
RandomBatchGeoSampler
###Code
for cache in [False, True]:
chesapeake = ChesapeakeDE(chesapeake_root, cache=cache)
naip = NAIP(naip_root, crs=chesapeake.crs, res=chesapeake.res, cache=cache)
dataset = chesapeake + naip
sampler = RandomBatchGeoSampler(naip, size=1000, batch_size=12, length=888)
dataloader = DataLoader(dataset, batch_sampler=sampler)
duration, count = time_epoch(dataloader)
print(duration, count)
###Output
230.51380324363708 74
53.99923872947693 74
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. BenchmarkingThis tutorial benchmarks the performance of various sampling strategies, with and without caching.It's recommended to run this notebook on Google Colab if you don't have your own GPU. Click the "Open in Colab" button above to get started. SetupFirst, we install TorchGeo.
###Code
%pip install torchgeo
###Output
_____no_output_____
###Markdown
ImportsNext, we import TorchGeo and any other libraries we need.
###Code
import os
import tempfile
import time
from typing import Tuple
from torch.utils.data import DataLoader
from torchgeo.datasets import NAIP, ChesapeakeDE
from torchgeo.datasets.utils import download_url, stack_samples
from torchgeo.samplers import RandomGeoSampler, GridGeoSampler, RandomBatchGeoSampler
###Output
_____no_output_____
###Markdown
DatasetsFor this tutorial, we'll be using imagery from the [National Agriculture Imagery Program (NAIP)](https://www.fsa.usda.gov/programs-and-services/aerial-photography/imagery-programs/naip-imagery/) and labels from the [Chesapeake Bay High-Resolution Land Cover Project](https://www.chesapeakeconservancy.org/conservation-innovation-center/high-resolution-data/land-cover-data-project/). First, we manually download a few NAIP tiles.
###Code
data_root = tempfile.gettempdir()
naip_root = os.path.join(data_root, "naip")
naip_url = "https://naipblobs.blob.core.windows.net/naip/v002/de/2018/de_060cm_2018/38075/"
tiles = [
"m_3807511_ne_18_060_20181104.tif",
"m_3807511_se_18_060_20181104.tif",
"m_3807512_nw_18_060_20180815.tif",
"m_3807512_sw_18_060_20180815.tif",
]
for tile in tiles:
download_url(naip_url + tile, naip_root)
###Output
_____no_output_____
###Markdown
Next, we tell TorchGeo to automatically download the corresponding Chesapeake labels.
###Code
chesapeake_root = os.path.join(data_root, "chesapeake")
chesapeake = ChesapeakeDE(chesapeake_root, download=True)
###Output
_____no_output_____
###Markdown
Timing function
###Code
def time_epoch(dataloader: DataLoader) -> Tuple[float, int]:
tic = time.time()
i = 0
for _ in dataloader:
i += 1
toc = time.time()
return toc - tic, i
###Output
_____no_output_____
###Markdown
RandomGeoSampler
###Code
for cache in [False, True]:
chesapeake = ChesapeakeDE(chesapeake_root, cache=cache)
naip = NAIP(naip_root, crs=chesapeake.crs, res=chesapeake.res, cache=cache)
dataset = chesapeake & naip
sampler = RandomGeoSampler(naip, size=1000, length=888)
dataloader = DataLoader(dataset, batch_size=12, sampler=sampler, collate_fn=stack_samples)
duration, count = time_epoch(dataloader)
print(duration, count)
###Output
296.582683801651 74
54.20210099220276 74
###Markdown
GridGeoSampler
###Code
for cache in [False, True]:
chesapeake = ChesapeakeDE(chesapeake_root, cache=cache)
naip = NAIP(naip_root, crs=chesapeake.crs, res=chesapeake.res, cache=cache)
dataset = chesapeake & naip
sampler = GridGeoSampler(naip, size=1000, stride=500)
dataloader = DataLoader(dataset, batch_size=12, sampler=sampler, collate_fn=stack_samples)
duration, count = time_epoch(dataloader)
print(duration, count)
###Output
391.90197944641113 74
118.0611424446106 74
###Markdown
RandomBatchGeoSampler
###Code
for cache in [False, True]:
chesapeake = ChesapeakeDE(chesapeake_root, cache=cache)
naip = NAIP(naip_root, crs=chesapeake.crs, res=chesapeake.res, cache=cache)
dataset = chesapeake & naip
sampler = RandomBatchGeoSampler(naip, size=1000, batch_size=12, length=888)
dataloader = DataLoader(dataset, batch_sampler=sampler, collate_fn=stack_samples)
duration, count = time_epoch(dataloader)
print(duration, count)
###Output
230.51380324363708 74
53.99923872947693 74
|
notebooks/2.2 Tidy data.ipynb | ###Markdown
2.2 Tidy data In this notebook, we take a closer look at the BL books dataset and tidy it up.
###Code
# imports
import pandas as pd
import json, os, codecs
from collections import defaultdict, OrderedDict
import seaborn as sns
###Output
_____no_output_____
###Markdown
Import the datasetLet us import the sample dataset in memory, as it is, without transformations. We rely on some Python libraries to do so.
###Code
root_folder = "../data/bl_books/sample/"
# metadata
filename = "book_data_sample.json"
metadata = json.load(codecs.open(os.path.join(root_folder,filename), encoding="utf8"))
# fulltexts
foldername = "full_texts"
texts = defaultdict(list)
for root, dirs, files in os.walk(os.path.join(root_folder,foldername)):
for f in files:
if ".json" in f:
t = json.load(codecs.open(os.path.join(root,f), encoding="utf8"))
texts[f] = t
# enriched metadata
filename = "extra_metadata_sample.csv"
df_extra = pd.read_csv(os.path.join(root_folder,filename), delimiter=";")
df_extra = df_extra.rename(str.lower, axis='columns') # rename columns to lower case
###Output
_____no_output_____
###Markdown
Take a look Let's take a look at the dataset
###Code
# there are 452 books in the sample
print(len(metadata))
# each one contains the following catalog metadata
metadata[0]
###Output
_____no_output_____
###Markdown
**Questions** on 'metadata':* Can you identify some messy aspects of this dataset representation?* Take a look at the 'shelfmarks' or 'title' fields: what is the problem here?* Do the same for the 'authors' and 'pdf' fields: what is the problem?* Look at the datefield of the *third* item in this list: is there a problem?
###Code
# let's check we have the same amount of books with a text file
print(len(texts))
###Output
452
###Markdown
*Note: we have selected for the sample just the first volume/pdf for every book.*
###Code
# each text comes as a list of lists: one per page, as follows [page number, text]
texts['000000196_01_text.json'][:9]
# the extra metadata can be already used as a data frame
df_extra[df_extra["first_pdf"] == "lsidyv35c55757"]
# we perform here a selection of rows which abide to a given condition. We'll see more of this later on.
###Output
_____no_output_____
###Markdown
**Question**: explore this data frame and find examples of messy aspects.
###Code
# Create data frames for all datasets
# We drop some variables we don't need at this stage
# metadata
datefield = list() # '1841'
publisher = list() # 'Privately printed',
title = list() # ["The Poetical Aviary, with a bird's-eye view of the English poets. [The preface signed: A. A.] Ms. notes"]
edition = list() # ''
place = list() # 'Calcutta'
issuance = list() # 'monographic'
authors = list() # {'creator': ['A. A.']}
first_pdf = list() # {'1': 'lsidyv35c55757'}
number_volumes = list()
identifier = list() # '000000196'
fulltext_filename = list() # 'sample/full_texts/000000196_01_text.json'
for book in metadata:
if book["date"]:
datefield.append(int(book["date"][:4]))
else:
datefield.append(None)
publisher.append(book["publisher"])
title.append(book["title"][0])
edition.append(book["edition"])
place.append(book["place"])
issuance.append(book["issuance"])
if "creator" in book["authors"].keys():
authors.append(book["authors"]["creator"]) # this is a list!
else:
authors.append([''])
first_pdf.append(book["pdf"]["1"])
number_volumes.append(len(book["pdf"]))
identifier.append(book["identifier"])
fulltext_filename.append(book["fulltext_filename"].split("/")[-1])
df_meta = pd.DataFrame.from_dict({"datefield": datefield, "publisher": publisher,
"title": title, "edition": edition, "place": place,
"issuance": issuance, "authors": authors, "first_pdf": first_pdf,
"number_volumes": number_volumes, "identifier": identifier,
"fulltext_filename": fulltext_filename})
# texts
how_many_pages = 50 # we reduce the amount of text to the first n pages, to make it faster to play with it
fulltext_filename = list()
fulltext = list()
for f,t in texts.items():
fulltext_filename.append(f)
text = " ".join([line[1] for line in t][:how_many_pages])
fulltext.append(text)
df_texts = pd.DataFrame.from_dict({"fulltext_filename": fulltext_filename, "fulltext": fulltext})
df_meta.head(5)
df_texts.head(1)
###Output
_____no_output_____
###Markdown
A second lookLet's check data types and typical values to be sure of what we have.
###Code
df_meta.dtypes
df_meta["datefield"].hist(bins=20)
df_meta["number_volumes"].hist()
variable = "place"
df_meta[variable].value_counts()[:11]
df_meta[variable].value_counts()[-10:]
df_extra["genre"].value_counts()
df_extra["type"].value_counts()
###Output
_____no_output_____
###Markdown
**Questions**: * 'place' seems like a reasonably uniform variable. Try with 'edition' instead, and think about how that might be more problematic.* While the 'genre' variable is uniform in representation, the 'type' variable is not. Can you find out what are the most well-represented types for each genre category? How might this influence our use of 'type' for analysis? UML modellingFrom an Entity-Relationship model to a relational model (tidy data).UML: Unified Modelling Language. A visual design language to go about modelling systems, including data. https://en.wikipedia.org/wiki/Unified_Modeling_Language
###Code
df_meta.head(1)
df_texts[df_texts["fulltext_filename"] == '000000196_01_text.json']
df_extra[df_extra["first_pdf"] == "lsidyv35c55757"]
###Output
_____no_output_____
###Markdown
**Now we switch to the blackboard and model!** Tidy dataset: relational-model* Full view (for your curiosity): https://dbdiagram.io/d/5d06a4adfff7633dfc8e3a42* Reduced view (we here use this one): https://dbdiagram.io/d/5d06a5d0fff7633dfc8e3a47
###Code
# first, join the extra metadata genre column to the metadata data frame. More details on joins in class 3.1.
df_extra_genre = df_extra[["type","genre","first_pdf"]]
df_book = df_meta.join(df_extra_genre.set_index('first_pdf'), on='first_pdf')
df_book.head(1)
# second, add the book_id to the book_text dataframe
df_book_text = df_texts.join(df_book[["identifier","fulltext_filename"]].set_index('fulltext_filename'), on='fulltext_filename')
df_book_text = df_book_text.rename(columns={"identifier":"book_id"})
df_book_text.head(3)
# third, pull our author information and create the author table and the author-book table
author_id = 0 # this is a counter which provides for a distinct identifier to every author
author_dict = OrderedDict()
author_book_table = {"book_id":list(),"author_id":list()}
for book_id, authors in df_book[["identifier","authors"]].values:
for author in authors:
if author not in author_dict.keys():
author_dict[author] = author_id
author_id += 1
author_book_table["book_id"].append(book_id)
author_book_table["author_id"].append(author_dict[author])
df_author_book = pd.DataFrame.from_dict(author_book_table)
df_author = pd.DataFrame.from_dict({"name":[v for v in author_dict.keys()],
"id":[k for k in author_dict.values()]})
df_author.set_index("id", inplace=True)
df_author.head(3)
df_author_book.head(3)
# drop authors from df_books
df_book.drop(columns=["authors"], inplace=True)
###Output
_____no_output_____
###Markdown
*Note: you don't need to do this: these dataframes are already there!*
###Code
# let's now save our data frames for future use
root_folder = "../data/bl_books/sample_tidy/"
df_book.to_csv(os.path.join(root_folder,"df_book.csv"), index=False)
df_author.to_csv(os.path.join(root_folder,"df_author.csv"), index=False)
df_author_book.to_csv(os.path.join(root_folder,"df_author_book.csv"), index=False)
df_book_text.to_csv(os.path.join(root_folder,"df_book_text.csv"), index=False)
###Output
_____no_output_____
###Markdown
A last look at the tidy dataset**Questions**:* how many authors are there? Are there books with more than one author? And authors who published more than one book?* How many books per genre do we have?* What is the typical year of publication of our books? SQL and relational databasesOur tidy dataset is structured now as a relational database. SQL (Structured Query Language) is the general language to query such databases. https://en.wikipedia.org/wiki/SQLMySQL and Postgresql are common implementation of relational databases.
###Code
from sqlalchemy import create_engine
from sqlalchemy.types import Integer
engine = create_engine('sqlite://', echo=False)
df_book.to_sql('books', con=engine, index=False, dtype={"datefield": Integer()})
engine.execute("SELECT * FROM books LIMIT 3").fetchall()
# create a data frame from a DB table
df_books_fromDB = pd.read_sql_query("SELECT * FROM books", engine)
df_books_fromDB.head(3)
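# A minimal sketch (commented out) of answering the questions above with the tidy
# data frames built earlier in this notebook (df_author, df_author_book, df_book):
# len(df_author)                                          # how many authors are there?
# (df_author_book.groupby("book_id").size() > 1).sum()    # books with more than one author
# (df_author_book.groupby("author_id").size() > 1).sum()  # authors with more than one book
# df_book["genre"].value_counts()                         # how many books per genre
# df_book["datefield"].median()                           # typical year of publication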
###Output
_____no_output_____ |
workspace/insurance_ml.ipynb | ###Markdown
Insurance MLpredict risk of accidents
###Code
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
tf.__version__
tf.random.set_seed(42)
import numpy as np
np.__version__
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (20, 8)
mpl.rcParams['axes.titlesize'] = 24
mpl.rcParams['axes.labelsize'] = 20
# !pip install -q dtreeviz
# https://github.com/parrt/dtreeviz
import dtreeviz
dtreeviz.__version__
# https://github.com/AndreasMadsen/python-lrcurve
# !pip install -q lrcurve
from lrcurve import KerasLearningCurve
# XXX: THIS IS VERY GENERAL AND CAN BE USED PRETTY MUCH ANYWHERE
from dtreeviz import clfviz
def plot_decision_boundaries(model, X, y_true, x1_range=None, x2_range=None):
_, ax = plt.subplots(figsize=(8,4), dpi=300)
ranges = None
if x1_range and x2_range:
ranges=(x1_range, x2_range)
clfviz(
model, X, y_true,
show=['instances', 'boundaries', 'probabilities', 'misclassified'],
markers=['v', '^', 'd'],
ntiles=50,
ax=ax,
ranges=ranges,
tile_fraction=1.0,
boundary_markersize=1.0,
feature_names=["Age", "Max Speed"],
colors={'class_boundary': 'black',
'tile_alpha': 0.5,
# 'warning' : 'yellow',
'classes':
[None, # 0 classes
None, # 1 class
None, # 2 classes
['#FF8080', '#FFFF80', '#8080FF'], # 3 classes
]
}
)
###Output
_____no_output_____
###Markdown
Step 1: Loading and exploring our data set This is a database of customers of an insurance company. Each data point is one customer. Risk is expressed as a number between 0 and 1, with 1 meaning the highest and 0 meaning the lowest risk of having an accident.
###Code
# XXX: why would everyone need to know where the data is being loaded from and what if that changes? also: how to even do that?
import pandas as pd
# df = pd.read_csv('https://raw.githubusercontent.com/DJCordhose/insurance-ml/main/data/insurance-customers-risk-1500.csv')
df = pd.read_csv('../data/insurance-customers-risk-1500.csv')
# XXX: Loading is mandatory, but why analysis of the data in a training notebook?
df.head()
df.describe()
features = ['speed', 'age', 'miles']
import seaborn as sns
# XXX: COLORS ARE WEIRD
plt.figure(figsize=(10, 10))
cm = df.corr()
cm3 = cm.iloc[:3, :3]
hm = sns.heatmap(cm3,
cbar=True,
annot=True,
square=True,
# cmap='Blues',
fmt='.2f',
yticklabels=features,
xticklabels=features)
###Output
_____no_output_____
###Markdown
Step 2: Training a neural network on 2 dimensions of the data
###Code
y = df['group'].values
# add more columns to list to have fewer features to train on
X = df.drop(['risk', 'group', 'miles'], axis='columns').values
# reorder, first age, then speed to match plotting
X = pd.DataFrame(np.array([X[:, 1], X[:, 0]]).T)
X.shape, y.shape
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=21, stratify=y)
X_train.shape, X_val.shape, y_train.shape, y_val.shape
### XXX: THERE IS SO MUCH ROOM FOR EXPERIMENT AND MAKING COPIES HERE
from tensorflow.keras.layers import InputLayer, Dense, Dropout, \
BatchNormalization, Activation
num_features = X.shape[1]
dropout = 0.6
model = tf.keras.Sequential()
model.add(InputLayer(name='input', input_shape=(num_features,)))
# model.add(Dense(500, name='hidden1'))
# model.add(Activation('relu'))
# model.add(BatchNormalization())
# model.add(Dropout(dropout))
# model.add(Dense(500, name='hidden2'))
# model.add(Activation('relu'))
# model.add(BatchNormalization())
# model.add(Dropout(dropout))
# model.add(Dense(500, name='hidden3'))
# model.add(Activation('relu'))
# model.add(BatchNormalization())
# model.add(Dropout(dropout))
model.add(Dense(name='output', units=3, activation='softmax'))
model.summary()
%%time
# XXX: this cries for a function with some parameters
BATCH_SIZE = 32
EPOCHS = 50
model.compile(loss='sparse_categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history = model.fit(X_train, y_train,
validation_data=(X_val, y_val),
epochs=EPOCHS,
batch_size=BATCH_SIZE,
callbacks=[KerasLearningCurve()],
verbose=0)
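# One possible answer to the XXX above (a commented-out sketch; names are illustrative):
# wrap compile/fit in a helper so the hyper-parameters become explicit arguments.
# def compile_and_fit(model, X_train, y_train, X_val, y_val,
#                     batch_size=32, epochs=50, callbacks=None):
#     model.compile(loss='sparse_categorical_crossentropy',
#                   optimizer='adam',
#                   metrics=['accuracy'])
#     return model.fit(X_train, y_train,
#                      validation_data=(X_val, y_val),
#                      epochs=epochs, batch_size=batch_size,
#                      callbacks=callbacks or [], verbose=0)
# history = compile_and_fit(model, X_train, y_train, X_val, y_val,
#                           batch_size=BATCH_SIZE, epochs=EPOCHS,
#                           callbacks=[KerasLearningCurve()])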
# XXX: getting final metrics is very common
train_loss, train_metric = model.evaluate(X_train, y_train, batch_size=BATCH_SIZE)
train_loss, train_metric
test_loss, test_metric = model.evaluate(X_val, y_val, batch_size=BATCH_SIZE)
test_loss, test_metric
# XXX: those plots are happening all the time
plt.yscale('log')
plt.ylabel("loss")
plt.xlabel("epochs")
plt.title('Loss over epochs')
plt.plot(history.history['loss']);
plt.plot(history.history['val_loss']);
plt.legend(['Training', 'Validation']);
plt.ylabel("accuracy")
plt.xlabel("epochs")
plt.title('Accuracy over epochs')
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
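# A commented-out sketch of factoring the two plots above into one helper,
# assuming a Keras History object like the one returned by model.fit:
# def plot_history(history, metric):
#     plt.ylabel(metric)
#     plt.xlabel("epochs")
#     plt.title(f'{metric} over epochs')
#     plt.plot(history.history[metric])
#     plt.plot(history.history[f'val_{metric}'])
#     plt.legend(['Training', 'Validation'])
#     plt.show()
# plot_history(history, 'loss')
# plot_history(history, 'accuracy')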
# XXX: those are plausibility checks and should be regression tests on quality of the model
model.predict([[48, 100]])
# this should be low risk (group 2)
model.predict([[48, 100]]).argmax()
# assert model.predict([[48, 100]]).argmax() == 2
model.predict([[30, 150]])
# high risk expected
model.predict([[30, 150]]).argmax()
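# A commented-out sketch of turning these spot checks into simple regression tests,
# as the XXX note below suggests. The expected class follows the comment above
# (group 2 = low risk); the 0.6 accuracy floor is only a placeholder, not a value
# taken from this notebook:
# assert model.predict([[48, 100]]).argmax() == 2, "48-year-old at 100 km/h should be low risk"
# assert test_metric > 0.6, "validation accuracy dropped below the expected floor"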
# XXX version without boundaries is straightforward, but one with ranges: which ranges make sense and why?
# plot_decision_boundaries(model, X, y, x1_range=(10, 150), x2_range=(50, 250))
plot_decision_boundaries(model, X, y)
# model.save?
# XXX: loading and saving of model are one-liners, but there are different formats and they are hard to remember
model.save('classifier.h5', save_format='h5')
model.save('classifier', save_format='tf')
!ls -l
!ls -l classifier/
!tar czvf classifier.tgz ./classifier
!ls -l
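# For the XXX above: both saved formats can be restored with the same one-liner.
# A commented-out sketch, assuming the files written by the cells above:
# loaded_h5 = tf.keras.models.load_model('classifier.h5')
# loaded_tf = tf.keras.models.load_model('classifier')
# loaded_tf.predict([[48, 100]]).argmax()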
###Output
_____no_output_____ |
notebooks/01.basic_motion.ipynb | ###Markdown
JetRacer Vehicle Control Basics Here we learn the basics of controlling the JetRacer's motor and servo programmatically. In a Jupyter notebook the program is executed block by block. The code written here is only a tiny fraction of the total program used to actually drive the vehicle, but it covers the important parameters for vehicle control. Let's actually run it. First, instantiate the [NvidiaRacecar](https://github.com/FaBoPlatform/jetracer/blob/master/jetracer/nvidia_racecar.py) class.
###Code
from jetracer.nvidia_racecar import NvidiaRacecar
car = NvidiaRacecar()
###Output
_____no_output_____
###Markdown
Next, we set the vehicle-specific parameters. An RC car is normally controlled by a **PWM signal** whose HIGH time ranges from **1000μs to 2000μs**, sent at a period of **66.6Hz**. In practice the period varies by manufacturer and device (from 50Hz to over 400Hz). * Steering is operated by sending a PWM signal to the servo. * Throttle is operated by sending a PWM signal to the ESC (electronic speed controller). When a PWM-controllable RC car is driven from a program, a PCA9685 is used almost without exception; JetRacer also drives the PCA9685 through the Adafruit library. For that reason **no code that handles PWM directly appears here**, but some knowledge about controlling an RC car is still needed. For the electronics (servo, ESC) installed in the Tamiya TT-02 XB, the PWM values of the control range are roughly as follows: * ESC neutral is 1520μs (the time the signal voltage stays HIGH) * Forward: 1480 - 1100μs (transmitter signal) * Reverse: 1600 - 1960μs (transmitter signal) * Neutral range: 1480 - 1600μs * Servo neutral is 1520μs * Left: 1520 - 1100μs (transmitter signal; in practice the physically movable range only goes to about 1250μs) * Right: 1520 - 1960μs (transmitter signal; in practice the physically movable range only goes to about 1750μs) * Period: 66.67Hz. These values shift with the transmitter's trim (neutral) adjustment, and can be observed by connecting the receiver's GND and signal line to an oscilloscope. Depending on the chassis, how it was assembled, the manufacturer, the electronics, and the settings, forward/reverse or left/right may be inverted, and the neutral position and endpoints may differ. **Most importantly**, JetRacer generates its PWM signals by driving the PCA9685 with the **adafruit-circuitpython-servokit-1.2.2** library, which maps setting values in the range [-1.0, 1.0] to a **50Hz** PWM signal of **[760, 2280]μs**. That is too wide a range for an RC car, so it must be limited to a range that is safe for the servo and motor.

Endpoint settings: These provide the same function as the **endpoints** that are a standard feature on mid-class and higher RC transmitters. Forcing the steering further when it physically cannot turn any more will destroy the servo, so setting endpoints that stop it at the limit is very important. Endpoints define the limit points of servo and motor travel.

|Parameter|Function|Value range|Description|
|:--|:--|:--|:--|
|car.steering_min_endpoint|Endpoint of the left steering angle|[-1.0, 1.0]|Around **-0.3** works well for the TT-02. With **steering_gain=1.0**, **steering_offset=0.0**, and **steering=-1.0**, choose a value at which the front tires are turned fully left and the servo does not buzz. Adjust in small steps of about 0.01.|
|car.steering_max_endpoint|Endpoint of the right steering angle|[-1.0, 1.0]|Around **0.3** works well for the TT-02. With **steering_gain=1.0**, **steering_offset=0.0**, and **steering=1.0**, choose a value at which the front tires are turned fully right and the servo does not buzz. Adjust in small steps of about 0.01.|
|car.throttle_min_endpoint|Endpoint for reverse|[-1.0, 1.0]|Around **-0.69** works well for the TT-02. Set the value at which the motor runs in reverse at full speed when **throttle_gain=1.0**, **throttle_offset=0.0**, **throttle=-1.0**.|
|car.throttle_max_endpoint|Endpoint for forward|[-1.0, 1.0]|Around **0.69** works well for the TT-02. Set the value at which the motor runs forward at full speed when **throttle_gain=1.0**, **throttle_offset=0.0**, **throttle=1.0**.|

Gain settings: Gains set the rate applied to the servo and motor values. Setting both steering and throttle gain to **1.0** lets the vehicle use its full performance, but it is safer to start with a throttle gain of about **0.3**.

|Parameter|Function|Value range|Description|
|:--|:--|:--|:--|
|car.steering_gain|Steering application rate|[-1.0, 1.0]|Set to **1.0** for the TT-02. The sign of **car.steering_gain** is fixed per vehicle (it depends on how the servo is mounted). Choose the sign so that the steering turns right when **car.steering** is positive and left when it is negative.|
|car.throttle_gain|Throttle application rate|[-1.0, 1.0]|For the TT-02, start with **-0.3**. Once you are used to the speed you can raise it up to **-1.0**. The sign is determined by the ESC specification; choose it so that the vehicle moves forward when **car.throttle** is positive.|

Initial values and offsets: Set the initial values and offsets for steering and throttle.

|Parameter|Function|Value range|Description|
|:--|:--|:--|:--|
|car.steering|Left/right steering value|[-1.0, 1.0]|The current steering value. 0.0 is the neutral position (in theory the car drives straight; in practice, play and distortion in the chassis usually keep it from driving perfectly straight).|
|car.steering_offset|Steering neutral correction|[-1.0, 1.0]|Set to the value at which the car drives straight. The stock TT-02 chassis has a lot of steering play, so driving perfectly straight is impossible; roughly straight is fine.|
|car.throttle|Forward/backward throttle value|[-1.0, 1.0]|The current throttle value. 0.0 is the neutral position.|
|car.throttle_offset|Throttle neutral correction|[-1.0, 1.0]|Set to the value at which the car stays stopped when no input is given.|
###Code
# Initialize the vehicle parameters
car.steering_min_endpoint = -0.3 # endpoint for the left steering angle
car.steering_max_endpoint = 0.3 # endpoint for the right steering angle
car.throttle_min_endpoint = -0.69 # endpoint for reverse
car.throttle_max_endpoint = 0.69 # endpoint for forward
car.steering = 0
car.steering_gain = 1.0
car.steering_offset = 0
car.throttle = 0
car.throttle_gain = -0.3
car.throttle_offset = 0
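# A commented-out sketch of the mapping described above: the servokit library maps a
# setting in [-1.0, 1.0] linearly to a pulse width of [760, 2280] microseconds, so each
# endpoint/offset value corresponds roughly to the following pulse width:
# def to_pulse_us(value, min_us=760.0, max_us=2280.0):
#     return min_us + (value + 1.0) / 2.0 * (max_us - min_us)
# print(to_pulse_us(car.steering_min_endpoint))  # left steering endpoint
# print(to_pulse_us(car.throttle_max_endpoint))  # forward endpoint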
###Output
_____no_output_____
###Markdown
Vehicle control Now let's display the sliders and input boxes for actually controlling the vehicle and try moving it. The vehicle will move, so make sure the surrounding area is safe before operating it. * **throttle** slider: sliding it up and down spins the tires. Slide it little by little. If the tires spin backwards, flip the sign of **throttle_gain**. To drive in reverse, a double action is required: return to neutral once, then slide down again. If you suddenly pull the slider down while moving forward, it brakes instead of reversing. * **steering** slider: sliding it left and right moves the steering left and right. If it moves in the opposite direction, flip the sign of **steering_gain**. Find appropriate values for **steering_min_endpoint**, **steering_max_endpoint**, **throttle_min_endpoint**, **throttle_max_endpoint**, **steering_gain**, **steering_offset**, **throttle_gain**, and **throttle_offset**, and write them down. You will also set these values as the vehicle parameters for autonomous driving.
###Code
import ipywidgets.widgets as widgets
from IPython.display import display
import traitlets
# create two sliders with range [-1.0, 1.0]
style = {'description_width': 'initial'}
steering_slider = widgets.FloatSlider(description='steering', style=style, min=-1.0, max=1.0, step=0.01, orientation='horizontal')
steering_gain = widgets.BoundedFloatText(description='steering_gain', style=style ,min=-1.0, max=1.0, step=0.01, value=car.steering_gain)
steering_offset = widgets.BoundedFloatText(description='steering_offset', style=style, min=-1.0, max=1.0, step=0.01, value=car.steering_offset)
throttle_slider = widgets.FloatSlider(description='throttle', style=style, min=-1.0, max=1.0, step=0.01, orientation='vertical')
throttle_gain = widgets.BoundedFloatText(description='throttle_gain', style=style, min=-1.0, max=1.0, step=0.01, value=car.throttle_gain)
throttle_offset = widgets.BoundedFloatText(description='throttle_offset', style=style, min=-1.0, max=1.0, step=0.01, value=car.throttle_offset)
# create a horizontal box container to place the sliders next to eachother
slider_container = widgets.HBox([throttle_slider, steering_slider])
slider_container.layout.align_items='center'
value_container = widgets.VBox([steering_gain, steering_offset, throttle_gain, throttle_offset])
control_container = widgets.HBox([slider_container, value_container])
control_container.layout.align_items='center'
# display the container in this cell's output
display(control_container)
# links
steering_link = traitlets.link((steering_slider, 'value'), (car, 'steering'))
steering_gain_link = traitlets.link((steering_gain, 'value'), (car, 'steering_gain'))
steering_offset_link = traitlets.link((steering_offset, 'value'), (car, 'steering_offset'))
throttle_link = traitlets.link((throttle_slider, 'value'), (car, 'throttle'))
throttle_gain_link = traitlets.link((throttle_gain, 'value'), (car, 'throttle_gain'))
throttle_offset_link = traitlets.link((throttle_offset, 'value'), (car, 'throttle_offset'))
###Output
_____no_output_____ |
AirBnB_Submission_1.ipynb | ###Markdown
AirBnB Price Prediction
###Code
# import the library
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# sklearn :: utils
from sklearn.model_selection import train_test_split
# sklearn :: models
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import GradientBoostingRegressor
# sklearn :: evaluation metrics
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error
sns.set_style('whitegrid')
###Output
_____no_output_____
###Markdown
Load the csv Data Files into Dataframe
###Code
df_train = pd.read_csv('data/train.csv')
df_test = pd.read_csv('data/test.csv')
print(df_train.shape, df_test.shape)
###Output
_____no_output_____
###Markdown
Handling Missing Values & Convert Data Type
###Code
print(df_train.columns)
df_train.head()
# Data types of the Feature
df_train.dtypes
# Find the missing values
print(df_train.isnull().sum())
df_missing = df_train.filter(['bathrooms', 'first_review', 'last_review', 'host_has_profile_pic', 'host_identity_verified',
'host_response_rate', 'host_since', 'neighbourhood', 'review_scores_rating','zipcode'])
df_missing
# Transform object/string Date/Time data to datetime
df_train['first_review'] = pd.to_datetime(df_train['first_review'])
df_train['last_review'] = pd.to_datetime(df_train['last_review'])
df_train['host_since'] = pd.to_datetime(df_train['host_since'])
df_train['host_since_year'] = df_train['host_since'].dt.year
print(round(df_train['host_since_year'].mean(skipna=True)))
df_train['host_since_year'].fillna(round(df_train['host_since_year'].mean()), inplace=True)
# df_train
# Replace NaN with Mean value in bathroom feature / column
df_train['bathrooms'].fillna(round(df_train['bathrooms'].mean()), inplace=True)
# Replace NaN with Mean value in bedrooms feature / column
df_train['bedrooms'].fillna(round(df_train['bedrooms'].mean()), inplace=True)
# Replace NaN with Mean value in bedrooms feature / column
df_train['beds'].fillna(round(df_train['beds'].mean()), inplace=True)
# Replace NaN with Mean value in review_scores_rating feature / column
df_train['review_scores_rating'].fillna(round(df_train['review_scores_rating'].mean()), inplace=True)
# Delete % sign from host_response_rate data and convert the data from object to integer
df_train['host_response_rate'] = df_train['host_response_rate'].str.replace('%', '')
df_train['host_response_rate'].fillna(0, inplace=True)
# Convert data type to Integer
df_train['host_response_rate'] = df_train['host_response_rate'].astype(int)
# Mean of host_response_rate without considering 0 values
mean_host_response_rate = round(df_train.loc[df_train['host_response_rate'] != 0, 'host_response_rate'].mean())
# Replace 0 with Mean value
df_train['host_response_rate'].mask(df_train['host_response_rate'] == 0, mean_host_response_rate, inplace=True)
# Replace t with 1, f with 0 and NaN with 0 of host_identity_verified feature
df_train['host_identity_verified'].mask(df_train['host_identity_verified'] == "t", "1", inplace=True)
df_train['host_identity_verified'].mask(df_train['host_identity_verified'] == "f", "0", inplace=True)
df_train['host_identity_verified'].fillna(0.0, inplace=True)
# COnvert Data Type to Float
df_train['host_identity_verified'] = df_train['host_identity_verified'].astype(float)
# Replace t with 1, f with 0 and NaN with 0 of host_identity_verified feature
df_train['host_has_profile_pic'].mask(df_train['host_has_profile_pic'] == "t", "1", inplace=True)
df_train['host_has_profile_pic'].mask(df_train['host_has_profile_pic'] == "f", "0", inplace=True)
df_train['host_has_profile_pic'].fillna(0.0, inplace=True)
# Convert Data Type to Float
df_train['host_has_profile_pic'] = df_train['host_has_profile_pic'].astype(float)
# Replace t with 1, f with 0 and NaN with 0 of host_identity_verified feature
df_train['instant_bookable'].mask(df_train['instant_bookable'] == "t", "1", inplace=True)
df_train['instant_bookable'].mask(df_train['instant_bookable'] == "f", "0", inplace=True)
# Convert Data Type to Float
df_train['instant_bookable'] = df_train['instant_bookable'].astype(int)
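# The three blocks above repeat the same t/f -> 1/0 conversion; a commented-out sketch
# of a helper that could apply it to any list of columns (illustrative only; the columns
# above have already been converted at this point):
# def boolean_to_int(df, columns):
#     for col in columns:
#         df[col] = df[col].map({'t': 1, 'f': 0}).fillna(0).astype(int)
#     return df
# df_train = boolean_to_int(df_train, ['host_has_profile_pic', 'host_identity_verified', 'instant_bookable'])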
df_train['room_type'].value_counts()
df_test['room_type'].value_counts()
df_train.groupby(by='room_type')['log_price'].mean()
# Find the missing values
print(df_train.isnull().sum())
###Output
_____no_output_____
###Markdown
Feature Re-Engineering
###Code
#List unique values of a Feature / Column
# df_train['zipcode'].value_counts()
# Create new features from city
df_city = pd.get_dummies(df_train['city'])
df_train = pd.concat([df_train, df_city], axis=1)
# Create new features from property_type
df_property_type = pd.get_dummies(df_train['property_type'])
df_train = pd.concat([df_train, df_property_type], axis=1)
# Create new features from bed_type
df_bed_type = pd.get_dummies(df_train['bed_type'])
df_train = pd.concat([df_train, df_bed_type], axis=1)
# Create new features from room_type
df_room_type = pd.get_dummies(df_train['room_type'])
df_train = pd.concat([df_train, df_room_type], axis=1)
df_train.head(10)
# Correlation
df_temp = df_train.filter(['log_price', 'accommodates', 'bathrooms', 'bedrooms', 'beds', 'Couch', 'Real Bed', 'Shared room', 'Entire home/apt',
'Private room', 'SF', 'instant_bookable'], axis=1)
df_temp.corr()
# select the columns
# X_columns = ['accommodates', 'bathrooms', 'bedrooms', 'beds', 'number_of_reviews', 'review_scores_rating']
X_columns = ['accommodates', 'bathrooms', 'bedrooms', 'beds', 'Real Bed', 'Shared room', 'Entire home/apt',
'Private room', 'SF']
# X_columns = ['accommodates', 'bathrooms', 'bedrooms', 'beds', 'cleaning_fee']
y_column = ['log_price']
# handle missing values
df_train = df_train[X_columns + y_column]
print(df_train.shape)
df_train = df_train.fillna(0.0) # probably not a good idea for 'review_scores_rating'
print(df_train.shape)
###Output
_____no_output_____
###Markdown
Experiment
###Code
# split the data using sklearn
threshold = 0.7
X = df_train[X_columns]
y = df_train[y_column]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1.0-threshold, shuffle=True)
print('X_train', X_train.shape)
print('y_train', y_train.shape)
print('X_test', X_test.shape)
print('y_test', y_test.shape)
def model_training(model_name, model, X_train, y_train):
model.fit(X_train, y_train)
return model
def model_prediction(model, X_test):
y_pred = model.predict(X_test)
return y_pred
def model_evaluation(model_name, y_test, y_pred):
print(model_name)
print('MAE', mean_absolute_error(y_test, y_pred))
print('RMSE', np.sqrt(mean_squared_error(y_test, y_pred)))
# plt.scatter(y_test, y_pred, alpha=0.3)
# plt.plot(range(0,5000000, 100), range(0,5000000, 100), '--r', alpha=0.3, label='Line1')
# plt.title(model_name)
# plt.xlabel('True Value')
# plt.ylabel('Predict Value')
# plt.xlim([0, 5000000])
# plt.ylim([0, 5000000])
# plt.show()
print('')
def run_experiment(model_name, model, X_train, y_train, X_test):
train_model = model_training(model_name, model, X_train, y_train)
predictions = model_prediction(train_model, X_test)
model_evaluation(model_name, y_test, predictions)
run_experiment('Linear Regression', LinearRegression(), X_train, y_train, X_test)
run_experiment('KNN 5', KNeighborsRegressor(5), X_train, y_train, X_test)
run_experiment('KNN 2', KNeighborsRegressor(2), X_train, y_train, X_test)
run_experiment('Decision Tree', DecisionTreeRegressor(), X_train, y_train, X_test)
run_experiment('Random Forest 10', RandomForestRegressor(10), X_train, y_train, X_test)
run_experiment('Random Forest 100', RandomForestRegressor(100), X_train, y_train, X_test)
run_experiment('Gradient Boosting', GradientBoostingRegressor(), X_train, y_train, X_test)
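# A commented-out sketch of collecting the same metrics into a table for side-by-side
# comparison instead of only printing them (the model list here is illustrative):
# results = []
# for name, mdl in [('Linear Regression', LinearRegression()),
#                   ('Random Forest 100', RandomForestRegressor(100)),
#                   ('Gradient Boosting', GradientBoostingRegressor())]:
#     mdl.fit(X_train, y_train)
#     pred = mdl.predict(X_test)
#     results.append({'model': name,
#                     'MAE': mean_absolute_error(y_test, pred),
#                     'RMSE': np.sqrt(mean_squared_error(y_test, pred))})
# pd.DataFrame(results).sort_values('RMSE')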
###Output
_____no_output_____
###Markdown
Model Training
###Code
# train a linear regression
model = GradientBoostingRegressor()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
###Output
_____no_output_____
###Markdown
Model Evaluation
###Code
rmse = np.sqrt(mean_squared_error(y_test, y_pred))
print('RMSE', round(rmse, 2))
plt.scatter(y_test, y_pred, alpha=0.3)
plt.plot(range(0,10), range(0,10), '--r', alpha=0.3, label='Line1')
plt.title('Gradient Boosting')
plt.xlabel('True Value')
plt.ylabel('Predict Value')
plt.show()
###Output
_____no_output_____
###Markdown
Prepare submission
###Code
# Create new features from city
df_city = pd.get_dummies(df_test['city'])
df_test = pd.concat([df_test, df_city], axis=1)
# Create new features from property_type
df_property_type = pd.get_dummies(df_test['property_type'])
df_test = pd.concat([df_test, df_property_type], axis=1)
# Create new features from bed_type
df_bed_type = pd.get_dummies(df_test['bed_type'])
df_test = pd.concat([df_test, df_bed_type], axis=1)
# Create new features from room_type
df_room_type = pd.get_dummies(df_test['room_type'])
df_test = pd.concat([df_test, df_room_type], axis=1)
df_prediction = df_test[X_columns].fillna(0.0)
df_test['log_price'] = model.predict(df_prediction)
df_test[['id', 'log_price']]
df_test[['id', 'log_price']].to_csv('Submission/AirBnB_Submission_1.csv', index=False)
###Output
_____no_output_____ |
Random Forest EI.ipynb | ###Markdown
Enseignement d'Intégration - Doctolib Notebook for the EI of the ST4 at CentraleSupélec. We want to use the data given by Doctolib on their patients to be able to predict if a new patient is going to be present or absent Introductory code Importing Libraries
###Code
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import confusion_matrix, roc_curve, auc, mean_squared_error, precision_recall_curve
from sklearn.metrics import f1_score, average_precision_score, balanced_accuracy_score, precision_score, recall_score
import seaborn as sns
from treeinterpreter import treeinterpreter as ti
from sklearn.tree import export_graphviz
import pydotplus
from IPython.display import Image
###Output
_____no_output_____
###Markdown
Reading Data Here we want to read a part of the csv given and maybe plot a few variables
###Code
filename = "encoded_data_all.csv"
df = pd.read_csv(filename, low_memory=False)
data = df
data.drop(["Unnamed: 0"], axis = 1, inplace = True)
data.head(5)
###Output
_____no_output_____
###Markdown
Random Forest Try to clasify patients using random forest with different weights. The no shows are a minoritary class Ratio of No Shows
###Code
print("Number of no shows: ", len(data.loc[data["no_show"] == 1]))
print("Number of shows: ", len(data.loc[data["no_show"] == 0]))
print("Ratio: {:.2f}%".format(100*(len(data.loc[data["no_show"] == 1])/len(data.loc[data["no_show"] == 0]))))
###Output
Number of no shows: 227373
Number of shows: 3011739
Ratio: 7.55%
###Markdown
Separating Test and Train sets
###Code
features = list(data.columns)
features.remove('no_show')
y = data['no_show']
X = data[features]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state = 10)
###Output
_____no_output_____
###Markdown
Evaluating parameters for the RF: we will use cross-validation to find the best weight, depth, and number of estimators for the RF. Performance Evaluation: for the RF we want to plot the confusion matrix and the ROC curve, and see the training and test errors.
###Code
# rf_classifier = RandomForestClassifier()
# cross_validation = ShuffleSplit(test_size = 0.2)
# clf = GridSearchCV(estimator = rf_classifier,
# param_grid = {'n_estimators': [10, 20],
# 'max_depth': [5, 10, 20, 25],
# 'class_weight': [{0:1, 1:1}, {0:1, 1:10}, {0:1, 1:25}]},
# scoring = 'f1_weighted',
# cv = cross_validation)
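# A sketch of how the (commented-out) grid search above could be run and its result used:
# clf.fit(X_train, y_train)
# print(clf.best_params_, clf.best_score_)
# clf = clf.best_estimator_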
clf = RandomForestClassifier(n_estimators = 20, max_depth = 23, class_weight = {0:1, 1:25})
params = clf.get_params
clf.fit(X_train, y_train)
y_train_hat = clf.predict(X_train)
y_test_hat = clf.predict(X_test)
print('Random Forest Parameters')
print(params)
###Output
Random Forest Parameters
<bound method BaseEstimator.get_params of RandomForestClassifier(bootstrap=True, class_weight={0: 1, 1: 25},
criterion='gini', max_depth=23, max_features='auto',
max_leaf_nodes=None, min_impurity_decrease=0.0,
min_impurity_split=None, min_samples_leaf=1,
min_samples_split=2, min_weight_fraction_leaf=0.0,
n_estimators=20, n_jobs=None, oob_score=False,
random_state=None, verbose=0, warm_start=False)>
###Markdown
Feature Importance
###Code
feature_importance = clf.feature_importances_
fig = plt.figure(figsize = (6,6))
ax = fig.add_subplot(1,1,1)
ax.bar(X.columns, feature_importance)
ax.set_title('Feature Importance')
ax.set_xlabel('Column Number')
ax.set_ylabel('Gini Importance')
ax.set_xticklabels([])
N = 15
important_features_values = pd.Series(feature_importance).sort_values(ascending = False).iloc[:N]
important_features_index = list(important_features_values.index.values)
important_features = [X.columns[i] for i in important_features_index]
important_features_values
X.columns
###Output
_____no_output_____
###Markdown
Confusion Matrix
###Code
CM = confusion_matrix(y_test, y_test_hat)
sns.heatmap(CM, annot=True)
###Output
_____no_output_____
###Markdown
Performance Measures
###Code
recall = CM[1,1]/(CM[1,0]+CM[1,1]) # tp/(tp+fn)
precision = CM[1,1]/(CM[0,1]+CM[1,1]) # # tp/(tp+fp)
tnr = CM[0,0]/(CM[0,1]+CM[0,0]) # tn/(tn+fp) Acc-
tpr = recall # Acc+
f_measure = 2*precision*recall/(precision+recall)
g_mean = (tpr*tnr) ** 0.5
weighted_accuracy = 0.5 * tnr + 0.5 * tpr
print('True Negative Rate (Acc-): {:.2f}%'.format(100*tnr))
print('True Positive Rate (Acc+): {:.2f}%'.format(100*tpr))
print('G-Mean: {:.2f}%'.format(100*g_mean))
print('Weighted Accuracy: {:.2f}%'.format(100*weighted_accuracy))
print('Precision: {:.2f}%'.format(100*precision))
print('Recall: {:.2f}%'.format(100*recall))
print('F-measure: {:.2f}%'.format(100*f_measure))
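# A commented-out sketch cross-checking the hand-computed measures with the sklearn
# helpers already imported at the top of this notebook:
# print('Precision (sklearn): {:.2f}%'.format(100*precision_score(y_test, y_test_hat)))
# print('Recall (sklearn): {:.2f}%'.format(100*recall_score(y_test, y_test_hat)))
# print('F-measure (sklearn): {:.2f}%'.format(100*f1_score(y_test, y_test_hat)))
# print('Weighted Accuracy (sklearn): {:.2f}%'.format(100*balanced_accuracy_score(y_test, y_test_hat)))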
###Output
True Negative Rate (Acc-): 62.88%
True Positive Rate (Acc+): 65.85%
G-Mean: 64.35%
Weighted Accuracy: 64.36%
Precision: 11.82%
Recall: 65.85%
F-measure: 20.04%
###Markdown
ROC Curve
###Code
fpr, tpr, _ = roc_curve(y_test, clf.predict_proba(X_test)[:, 1])
area=auc(fpr,tpr)
fig=plt.figure()
lw = 2
plt.plot(fpr,tpr,color="darkred", lw=lw, label="ROC curve RF : AUC = {:.3f}".format(area))
plt.plot([0,1], [0,1], color="navy", lw=lw, linestyle="--")
plt.xlim(0.0, 1.0)
plt.ylim(0.0, 1.05)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Receiver operating characteristic curve")
plt.legend(loc="lower right")
###Output
_____no_output_____
###Markdown
Precision Recall Curve
###Code
prec, rec, _ = precision_recall_curve(y_test, clf.predict_proba(X_test)[:, 1])
fig=plt.figure()
plt.plot(rec, prec, color="darkred", lw=2, label="Precision-Recall Curve RF")
plt.xlim(0.0, 1.0)
plt.ylim(0.0, 1.05)
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.title("Precision-Recall Curve")
###Output
_____no_output_____ |
docs/contents/user/Dimensionality.ipynb | ###Markdown
Dimensionality and Compatibility The dimensional analysis of a quantity, no matter the form, can be performed invoking the method `pyunitwizard.get_dimensionality()`:
###Code
import pyunitwizard as puw
puw.configure.load_library(['pint', 'openmm.unit'])
q = puw.quantity(1.4, 'kJ/mol', form='openmm.unit')
puw.get_dimensionality(q)
###Output
_____no_output_____
###Markdown
Let's see a second example:
###Code
q = puw.quantity('3.5N/(2.0nm**2)')
puw.get_dimensionality(q)
###Output
_____no_output_____
###Markdown
Where dimensions correspond to the following fundamental quantities:

| Fundamental Quantity | Dimension |
| -------------------- | --------- |
| Length | [L] |
| Mass | [M] |
| Time | [T] |
| Temperature | [K] |
| Substance | [mol] |
| Electric Current | [A] |
| Luminous Intensity | [Cd] |

In addition, PyUnitWizard can check the dimensional compatibility between quantities with `pyunitwizard.compatibility()`, again regardless of their pythonic form:
###Code
q1 = puw.quantity(1.0, 'meter', form='openmm.unit')
q2 = puw.quantity(1.0, 'centimeter', form='openmm.unit')
puw.compatibility(q1, q2)
q1 = puw.quantity(1.0, 'kJ/mol', form='openmm.unit')
q2 = puw.quantity(1.0, 'kcal/mol', form='pint')
puw.compatibility(q1, q2)
q1 = puw.quantity(1.0, 'nm**3', form='pint')
q2 = puw.quantity(1.0, 'litre', form='pint')
puw.compatibility(q1, q2)
q1 = puw.quantity(1.0, 'nm**3', form='pint')
q2 = puw.quantity(1.0, 'ps', form='openmm.unit')
puw.compatibility(q1, q2)
q1 = puw.quantity(1.0, 'degrees', form='pint')
q2 = puw.quantity(1.0, 'radians', form='pint')
puw.compatibility(q1, q2)
q1 = puw.quantity(1.0, 'degrees', form='openmm.unit')
q2 = puw.quantity(1.0, 'hertzs', form='pint')
puw.compatibility(q1, q2)
###Output
_____no_output_____ |
2-Kaggle-Diabetic-Retinopathy/Eye_Analysis_V1.ipynb | ###Markdown
Kaggle Diabetic Retinopathy Detection Analysis This notebook does basic analysis on the training images. Link to competition: https://www.kaggle.com/c/aptos2019-blindness-detection Forked from "Orig_TFDataset_Analysis_V01". Sample pandas examples: https://github.com/rasbt/pattern_classification/blob/master/data_viz/matplotlib_viz_gallery.ipynb Processing for using Google Drive, Kaggle and normal includes
###Code
#"""
# Google Collab specific stuff....
from google.colab import drive
drive.mount('/content/drive')
import os
!ls "/content/drive/My Drive"
USING_COLLAB = True
# Force to use 2.x version of Tensorflow
%tensorflow_version 2.x
#"""
# Upload your "kaggle.json" file that you created from your Kaggle Account tab
# If you downloaded it, it would be in your "Downloads" directory
from google.colab import files
files.upload()
# To start, install kaggle libs
#!pip install -q kaggle
# Workaround to install the newest version
# https://stackoverflow.com/questions/58643979/google-colaboratory-use-kaggle-server-version-1-5-6-client-version-1-5-4-fai
!pip install kaggle --upgrade --force-reinstall --no-deps
# On your VM, create kaggle directory and modify access rights
!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/
!ls ~/.kaggle
!chmod 600 /root/.kaggle/kaggle.json
#!kaggle competitions list
# Takes about 4 mins to download
!kaggle competitions download -c aptos2019-blindness-detection
# Takes about 5 mins to unzip
!unzip -uq aptos2019-blindness-detection.zip
!ls
# Cleanup to add some space....
!rm -r test_images
!rm aptos2019-blindness-detection.zip
# Setup sys.path to find MachineLearning lib directory
# Check if "USING_COLLAB" is defined, if yes, then we are using Colab, otherwise set to False
try: USING_COLLAB
except NameError: USING_COLLAB = False
%load_ext autoreload
%autoreload 2
# set path env var
import sys
if "MachineLearning" in sys.path[0]:
pass
else:
print(sys.path)
if USING_COLLAB:
sys.path.insert(0, '/content/drive/My Drive/GitHub/MachineLearning/lib') ###### CHANGE FOR SPECIFIC ENVIRONMENT
else:
sys.path.insert(0, '/Users/john/Documents/GitHub/MachineLearning/lib') ###### CHANGE FOR SPECIFIC ENVIRONMENT
print(sys.path)
# Normal includes...
from __future__ import absolute_import, division, print_function, unicode_literals
import os, sys, random, warnings, time, copy, csv
import numpy as np
import IPython.display as display
from PIL import Image
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
print(tf.__version__)
# This allows the runtime to decide how best to optimize CPU/GPU usage
AUTOTUNE = tf.data.experimental.AUTOTUNE
from TrainingUtils import *
#warnings.filterwarnings("ignore", category=DeprecationWarning)
#warnings.filterwarnings("ignore", category=UserWarning)
warnings.filterwarnings("ignore", "(Possibly )?corrupt EXIF data", UserWarning)
###Output
_____no_output_____
###Markdown
General Setup
- Create a dictionary wrapped by a class for global values. This is how I manage global vars in my notebooks.
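The actual `GlobalParms` class comes from the `TrainingUtils` module imported above, so the sketch below is only an assumed illustration of the pattern (the attribute handling and the derived `TRAIN_PATH` are guesses), not the real implementation:

```python
# Assumed sketch of a "dict wrapped in a class" parameter holder.
# The real GlobalParms lives in TrainingUtils; this is illustrative only.
import os

class SimpleParms:
    def __init__(self, **kwargs):
        self._parms = dict(kwargs)                    # everything lives in one dict
        if "ROOT_PATH" in self._parms and "TRAIN_DIR" in self._parms:
            # derived path, similar in spirit to the TRAIN_PATH used later on
            self._parms["TRAIN_PATH"] = os.path.join(
                self._parms["ROOT_PATH"], self._parms["TRAIN_DIR"])

    def __getattr__(self, name):
        try:
            return self._parms[name]                  # enables parms.BATCH_SIZE style access
        except KeyError:
            raise AttributeError(name) from None

    def print_contents(self):
        for key, value in sorted(self._parms.items()):
            print(f"{key}: {value}")

# usage mirrors the real class:
# parms = SimpleParms(ROOT_PATH="", TRAIN_DIR="train_images", BATCH_SIZE=1)
```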
###Code
# Set root directory path to data
if USING_COLLAB:
#ROOT_PATH = "/content/drive/My Drive/ImageData/KaggleDiabeticRetinopathy/Data" ###### CHANGE FOR SPECIFIC ENVIRONMENT
ROOT_PATH = "" ###### CHANGE FOR SPECIFIC ENVIRONMENT
else:
ROOT_PATH = ""
# Establish global dictionary
parms = GlobalParms(ROOT_PATH=ROOT_PATH,
TRAIN_DIR="train_images",
NUM_CLASSES=5,
CLASS_NAMES=['Normal', 'Moderate', 'Mild', 'Proliferative', 'Severe'],
IMAGE_ROWS=224,
IMAGE_COLS=224,
IMAGE_CHANNELS=3,
BATCH_SIZE=1, # must be one if you want to see different image sizes
IMAGE_EXT=".png")
parms.print_contents()
print("Classes: {} Labels: {} {}".format(parms.NUM_CLASSES, len(parms.CLASS_NAMES), parms.CLASS_NAMES) )
# Simple helper method to display batches of images with labels....
def show_batch(image_batch, label_batch, number_to_show=25, r=5, c=5, print_shape=False):
show_number = min(number_to_show, parms.BATCH_SIZE)
if show_number < 8: #if small number, then change row, col and figure size
if parms.IMAGE_COLS > 64 or parms.IMAGE_ROWS > 64:
plt.figure(figsize=(25,25))
else:
plt.figure(figsize=(10,10))
r = 4
c = 2
else:
plt.figure(figsize=(10,10))
for n in range(show_number):
if print_shape:
print("Image shape: {} Max: {} Min: {}".format(image_batch[n].shape, np.max(image_batch[n]), np.min(image_batch[n])))
ax = plt.subplot(r,c,n+1)
plt.imshow(tf.keras.preprocessing.image.array_to_img(image_batch[n]))
plt.title(parms.CLASS_NAMES[np.argmax(label_batch[n])])
plt.axis('off')
###Output
_____no_output_____
###Markdown
Load csv file
- Load list of filenames and diagnosis
- Perform initial analysis on dataframe
###Code
train_df = pd.read_csv(os.path.join(parms.ROOT_PATH, "train.csv"))
train_df["file_path"] = parms.TRAIN_PATH + "/" + train_df["id_code"] + ".png"
images_list_len = len(train_df)
print("Training set is {}".format(len(train_df)))
train_df.head()
train_df['diagnosis'].hist()
train_df['diagnosis'].value_counts()
# Plot diagnosis
sizes = train_df.diagnosis.value_counts()
fig1, ax1 = plt.subplots(figsize=(10,7))
ax1.pie(sizes, labels=parms.CLASS_NAMES, autopct='%1.1f%%', shadow=True, startangle=90)
ax1.axis("equal")
plt.title("Diabetic retinopathy labels")
plt.show()
###Output
_____no_output_____
###Markdown
Create dataset and normal mappings

Pipeline flow: create dataset -> map "process_path" -> repeat forever -> batch

The mappings open and read an image. These next cells should be changed based on your specific needs.
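A condensed sketch of that flow in a single chain, using placeholder paths and a stand-in mapping function (the real pipeline is built from `train_df` in the cells below):

```python
# Condensed sketch of the pipeline above: from_tensor_slices -> map -> repeat -> batch.
# Paths, labels and the mapping function are placeholders, not the notebook's real ones.
import tensorflow as tf

AUTOTUNE = tf.data.experimental.AUTOTUNE

def toy_process_path(file_path, label):
    # stand-in for process_path: fabricate a blank image instead of reading a file
    image = tf.zeros([224, 224, 3], dtype=tf.float32)
    return image, label

paths = tf.constant(["a.png", "b.png", "c.png"])
labels = tf.constant([0, 1, 2], dtype=tf.int32)

dataset = (
    tf.data.Dataset.from_tensor_slices((paths, labels))
    .map(toy_process_path, num_parallel_calls=AUTOTUNE)
    .repeat()        # repeat forever
    .batch(2)        # then batch
)

for images, batch_labels in dataset.take(1):
    print(images.shape, batch_labels.numpy())   # (2, 224, 224, 3) [0 1]
```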
###Code
# Decode the image, convert to float, normalize by 255 and resize
def decode_img(image: tf.Tensor) -> tf.Tensor:
# convert the compressed string to a 3D uint8 tensor
image = tf.image.decode_png(image, channels=parms.IMAGE_CHANNELS)
# Use `convert_image_dtype` to convert to floats in the [0,1] range.
image = tf.image.convert_image_dtype(image, parms.IMAGE_DTYPE)
# uncomment to resize the image to the desired size.
#image = tf.image.resize(image, [parms.IMAGE_ROWS, parms.IMAGE_COLS])
#gamma = tf.math.reduce_mean(image) + 0.5
#image = tf.image.adjust_gamma(image, gamma=gamma)
return image
# method mapped to load, resize and apply any augmentations
def process_path(file_path: tf.Tensor, label: tf.Tensor) -> tf.Tensor:
# load the raw data from the file as a string
image = tf.io.read_file(file_path)
image = decode_img(image)
return image, label
###Output
_____no_output_____
###Markdown
Create dataset from list of images and apply mappings
###Code
# Create Dataset from list of images
full_dataset = tf.data.Dataset.from_tensor_slices(
(train_df["file_path"].values,
tf.cast(train_df['diagnosis'].values, tf.int32)))
# Verify image paths were loaded and save one path for later in "some_image"
for f, l in full_dataset.take(2):
some_image = f.numpy().decode("utf-8")
print(f.numpy(), l.numpy())
print("Some Image: ", some_image)
# map training images to processing, includes any augmentation
full_dataset = full_dataset.map(process_path, num_parallel_calls=AUTOTUNE)
# Verify the mapping worked
for image, label in full_dataset.take(1):
print("Image shape: {} Max: {} Min: {}".format(image.numpy().shape, np.max(image.numpy()), np.min(image.numpy())))
print("Label: ", label.numpy())
# Repeat forever
full_dataset = full_dataset.repeat()
# set the batch size
full_dataset = full_dataset.batch(parms.BATCH_SIZE)
# Show the images, execute this cell multiple times to see the images
steps = 1
for image_batch, label_batch in tqdm(full_dataset.take(steps)):
show_batch(image_batch.numpy(), label_batch.numpy())
###Output
_____no_output_____
###Markdown
Collect image information

This will loop over each image and collect information used to build a Pandas dataframe. The dataframe is then used to report information, and you can also save it for future analysis. This is where you can customize what information is collected.

The size of the image is not changed here, but you can resize each image so it matches exactly how it will be used for training. I've found that looking at the raw image information is more helpful than looking at images that have been resized.
###Code
# Collect various information about an image
def dataset_analysis(dataset, steps, test=False):
if test == True:
steps = 4
image_info = []
for image_batch, label_batch in tqdm(dataset.take(steps)):
#show_batch(image_batch.numpy(), label_batch.numpy())
for j in range(parms.BATCH_SIZE):
image = image_batch[j].numpy()
label = label_batch[j].numpy()
#label = np.argmax(label)
r = image.shape[0]
c = image.shape[1]
d = 0
mean0=0
mean1=0
mean2=0
if parms.IMAGE_CHANNELS == 3:
d = image.shape[2]
mean0 = np.mean(image[:,:,0])
mean1 = np.mean(image[:,:,1])
mean2 = np.mean(image[:,:,2])
image_info.append([label, r, c, d, np.mean(image), np.std(image), mean0, mean1, mean2])
if test:
print(image_info[-1])
return image_info
# Build image_info list
steps = int(np.ceil(len(train_df) / parms.BATCH_SIZE))
image_info = dataset_analysis(full_dataset, steps=steps, test=False)
# Build pandas dataframe
image_info_df = pd.DataFrame(image_info, columns =['label', 'row','col', 'dim', 'mean', 'std', "chmean0", "chmean1", "chmean2"])
print(image_info_df.describe())
image_info_df.head()
#https://jamesrledoux.com/code/group-by-aggregate-pandas
image_info_df.groupby('label').agg({'mean': ['count', 'mean', 'min', 'max'], 'std': ['mean', 'min', 'max'], 'row': ['mean', 'min', 'max'],'col': ['mean', 'min', 'max'], 'chmean0':['mean'],'chmean1':['mean'],'chmean2':['mean'] })
image_info_df.agg({'mean': ['mean', 'min', 'max'], 'std': ['mean', 'min', 'max'], 'row': ['mean', 'min', 'max'],'col': ['mean', 'min', 'max'] })
image_mean = image_info_df["mean"]
print("Mean: ", np.mean(image_mean), " STD: ", np.std(image_mean))
image_info_df["label"].value_counts().plot.bar()
image_info_df["label"].value_counts().plot.pie()
image_info_df.hist(column='mean')
image_info_df.plot.scatter(x='row', y='col', color='Blue', label='Row Col')
# Plot Histograms and KDE plots for images from the training set
# Source: https://www.kaggle.com/chewzy/eda-weird-images-with-new-updates
import seaborn as sns
plt.figure(figsize=(14,6))
plt.subplot(121)
sns.distplot(image_info_df["col"], kde=False, label='Train Col')
sns.distplot(image_info_df["row"], kde=False, label='Train Row')
plt.legend()
plt.title('Training Dimension Histogram', fontsize=15)
plt.subplot(122)
sns.kdeplot(image_info_df["col"], label='Train Col')
sns.kdeplot(image_info_df["row"], label='Train Row')
plt.legend()
plt.title('Train Dimension KDE Plot', fontsize=15)
plt.tight_layout()
plt.show()
# Save results
# If saved on VM, need to copy to storage
result_path = os.path.join(parms.ROOT_PATH, "image-info.pkl")
image_info_df.to_pickle(result_path)
image_info_df["c-r"] = image_info_df["col"] - image_info_df["row"]
image_info_df.head()
#print(np.count_nonzero(a < 4))
c_r = image_info_df["c-r"].values.tolist()
c_r_np = np.array(c_r)
print(" == 0, ", np.count_nonzero(c_r_np == 0))
print(" < 0, ", np.count_nonzero(c_r_np < 0))
print(" > 0, ", np.count_nonzero(c_r_np > 0))
print(" > 500, ", np.count_nonzero(c_r_np > 500))
print("> 1000, ", np.count_nonzero(c_r_np > 1000))
# = 0 - 974, < 0 - none, >0 - 2688, >500 - 2340, >1000 462,
# open and read saved file
image_info_df = pd.read_pickle(result_path)
image_info_df.head()
###Output
_____no_output_____ |
site/en-snapshot/guide/keras/writing_a_training_loop_from_scratch.ipynb | ###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Writing a training loop from scratch

Setup
###Code
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
###Output
_____no_output_____
###Markdown
Introduction

Keras provides default training and evaluation loops, `fit()` and `evaluate()`. Their usage is covered in the guide [Training & evaluation with the built-in methods](https://www.tensorflow.org/guide/keras/train_and_evaluate/).

If you want to customize the learning algorithm of your model while still leveraging the convenience of `fit()` (for instance, to train a GAN using `fit()`), you can subclass the `Model` class and implement your own `train_step()` method, which is called repeatedly during `fit()`. This is covered in the guide [Customizing what happens in `fit()`](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit/).

Now, if you want very low-level control over training & evaluation, you should write your own training & evaluation loops from scratch. This is what this guide is about.

Using the `GradientTape`: a first end-to-end example

Calling a model inside a `GradientTape` scope enables you to retrieve the gradients of the trainable weights of the layer with respect to a loss value. Using an optimizer instance, you can use these gradients to update these variables (which you can retrieve using `model.trainable_weights`).

Let's consider a simple MNIST model:
###Code
inputs = keras.Input(shape=(784,), name="digits")
x1 = layers.Dense(64, activation="relu")(inputs)
x2 = layers.Dense(64, activation="relu")(x1)
outputs = layers.Dense(10, name="predictions")(x2)
model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
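For contrast, the `train_step()` override route mentioned in the introduction looks roughly like this. This is a minimal sketch assuming the standard TF 2 Keras API (`compiled_loss`, `compiled_metrics`); it is not part of this guide's code and only shows what the hand-written loops below replace.

```python
# Minimal sketch (assumption, not this guide's code): override train_step()
# in a Model subclass and keep using the convenience of fit().
import tensorflow as tf
from tensorflow import keras

class CustomModel(keras.Model):
    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)               # forward pass
            loss = self.compiled_loss(y, y_pred)          # loss configured in compile()
        grads = tape.gradient(loss, self.trainable_weights)
        self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
        self.compiled_metrics.update_state(y, y_pred)     # metrics configured in compile()
        return {m.name: m.result() for m in self.metrics}
```

A model built this way is compiled and fit like any other Keras model; the loops written by hand below trade that convenience for full control.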
###Markdown
Let's train it using mini-batch gradient descent with a custom training loop. First, we're going to need an optimizer, a loss function, and a dataset:
###Code
# Instantiate an optimizer.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Prepare the training dataset.
batch_size = 64
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = np.reshape(x_train, (-1, 784))
x_test = np.reshape(x_test, (-1, 784))
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)
###Output
_____no_output_____
###Markdown
Here's our training loop:
- We open a `for` loop that iterates over epochs
- For each epoch, we open a `for` loop that iterates over the dataset, in batches
- For each batch, we open a `GradientTape()` scope
- Inside this scope, we call the model (forward pass) and compute the loss
- Outside the scope, we retrieve the gradients of the weights of the model with regard to the loss
- Finally, we use the optimizer to update the weights of the model based on the gradients
###Code
epochs = 2
for epoch in range(epochs):
print("\nStart of epoch %d" % (epoch,))
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
# Open a GradientTape to record the operations run
# during the forward pass, which enables autodifferentiation.
with tf.GradientTape() as tape:
# Run the forward pass of the layer.
# The operations that the layer applies
# to its inputs are going to be recorded
# on the GradientTape.
logits = model(x_batch_train, training=True) # Logits for this minibatch
# Compute the loss value for this minibatch.
loss_value = loss_fn(y_batch_train, logits)
# Use the gradient tape to automatically retrieve
# the gradients of the trainable variables with respect to the loss.
grads = tape.gradient(loss_value, model.trainable_weights)
# Run one step of gradient descent by updating
# the value of the variables to minimize the loss.
optimizer.apply_gradients(zip(grads, model.trainable_weights))
# Log every 200 batches.
if step % 200 == 0:
print(
"Training loss (for one batch) at step %d: %.4f"
% (step, float(loss_value))
)
print("Seen so far: %s samples" % ((step + 1) * 64))
###Output
_____no_output_____
###Markdown
Low-level handling of metrics

Let's add metrics monitoring to this basic loop. You can readily reuse the built-in metrics (or custom ones you wrote) in such training loops written from scratch. Here's the flow:
- Instantiate the metric at the start of the loop
- Call `metric.update_state()` after each batch
- Call `metric.result()` when you need to display the current value of the metric
- Call `metric.reset_states()` when you need to clear the state of the metric (typically at the end of an epoch)

Let's use this knowledge to compute `SparseCategoricalAccuracy` on validation data at the end of each epoch (the full loop follows the short standalone illustration below):
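A toy standalone run of that lifecycle (the values are made up; this block is only illustrative and is not part of the guide's code):

```python
# Toy walk-through of the metric lifecycle described above (illustrative only).
from tensorflow import keras

metric = keras.metrics.SparseCategoricalAccuracy()   # instantiate at the start
metric.update_state([1, 2], [[0.1, 0.8, 0.1],        # update after each "batch"
                             [0.2, 0.2, 0.6]])
print(float(metric.result()))                        # 1.0 -> current value
metric.reset_states()                                # clear, e.g. at epoch end
print(float(metric.result()))                        # back to 0.0
```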
###Code
# Get model
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
# Instantiate an optimizer to train the model.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Prepare the metrics.
train_acc_metric = keras.metrics.SparseCategoricalAccuracy()
val_acc_metric = keras.metrics.SparseCategoricalAccuracy()
# Prepare the training dataset.
batch_size = 64
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)
# Prepare the validation dataset.
# Reserve 10,000 samples for validation.
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
###Output
_____no_output_____
###Markdown
Here's our training & evaluation loop:
###Code
import time
epochs = 2
for epoch in range(epochs):
print("\nStart of epoch %d" % (epoch,))
start_time = time.time()
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
with tf.GradientTape() as tape:
logits = model(x_batch_train, training=True)
loss_value = loss_fn(y_batch_train, logits)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
# Update training metric.
train_acc_metric.update_state(y_batch_train, logits)
# Log every 200 batches.
if step % 200 == 0:
print(
"Training loss (for one batch) at step %d: %.4f"
% (step, float(loss_value))
)
print("Seen so far: %d samples" % ((step + 1) * 64))
# Display metrics at the end of each epoch.
train_acc = train_acc_metric.result()
print("Training acc over epoch: %.4f" % (float(train_acc),))
# Reset training metrics at the end of each epoch
train_acc_metric.reset_states()
# Run a validation loop at the end of each epoch.
for x_batch_val, y_batch_val in val_dataset:
val_logits = model(x_batch_val, training=False)
# Update val metrics
val_acc_metric.update_state(y_batch_val, val_logits)
val_acc = val_acc_metric.result()
val_acc_metric.reset_states()
print("Validation acc: %.4f" % (float(val_acc),))
print("Time taken: %.2fs" % (time.time() - start_time))
###Output
_____no_output_____
###Markdown
Speeding-up your training step with `tf.function`

The default runtime in TensorFlow 2.0 is [eager execution](https://www.tensorflow.org/guide/eager). As such, our training loop above executes eagerly.

This is great for debugging, but graph compilation has a definite performance advantage. Describing your computation as a static graph enables the framework to apply global performance optimizations. This is impossible when the framework is constrained to greedily execute one operation after another, with no knowledge of what comes next.

You can compile into a static graph any function that takes tensors as input. Just add a `@tf.function` decorator on it, like this:
###Code
@tf.function
def train_step(x, y):
with tf.GradientTape() as tape:
logits = model(x, training=True)
loss_value = loss_fn(y, logits)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
train_acc_metric.update_state(y, logits)
return loss_value
###Output
_____no_output_____
###Markdown
Let's do the same with the evaluation step:
###Code
@tf.function
def test_step(x, y):
val_logits = model(x, training=False)
val_acc_metric.update_state(y, val_logits)
###Output
_____no_output_____
###Markdown
Now, let's re-run our training loop with this compiled training step:
###Code
import time
epochs = 2
for epoch in range(epochs):
print("\nStart of epoch %d" % (epoch,))
start_time = time.time()
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
loss_value = train_step(x_batch_train, y_batch_train)
# Log every 200 batches.
if step % 200 == 0:
print(
"Training loss (for one batch) at step %d: %.4f"
% (step, float(loss_value))
)
print("Seen so far: %d samples" % ((step + 1) * 64))
# Display metrics at the end of each epoch.
train_acc = train_acc_metric.result()
print("Training acc over epoch: %.4f" % (float(train_acc),))
# Reset training metrics at the end of each epoch
train_acc_metric.reset_states()
# Run a validation loop at the end of each epoch.
for x_batch_val, y_batch_val in val_dataset:
test_step(x_batch_val, y_batch_val)
val_acc = val_acc_metric.result()
val_acc_metric.reset_states()
print("Validation acc: %.4f" % (float(val_acc),))
print("Time taken: %.2fs" % (time.time() - start_time))
###Output
_____no_output_____
###Markdown
Much faster, isn't it?

Low-level handling of losses tracked by the model

Layers & models recursively track any losses created during the forward pass by layers that call `self.add_loss(value)`. The resulting list of scalar loss values is available via the property `model.losses` at the end of the forward pass. If you want to use these loss components, you should sum them and add them to the main loss in your training step.

Consider this layer, which creates an activity regularization loss:
###Code
class ActivityRegularizationLayer(layers.Layer):
def call(self, inputs):
self.add_loss(1e-2 * tf.reduce_sum(inputs))
return inputs
###Output
_____no_output_____
###Markdown
Let's build a really simple model that uses it:
###Code
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu")(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
Here's what our training step should look like now:
###Code
@tf.function
def train_step(x, y):
with tf.GradientTape() as tape:
logits = model(x, training=True)
loss_value = loss_fn(y, logits)
# Add any extra losses created during the forward pass.
loss_value += sum(model.losses)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
train_acc_metric.update_state(y, logits)
return loss_value
###Output
_____no_output_____
###Markdown
Summary

Now you know everything there is to know about using built-in training loops and writing your own from scratch.

To conclude, here's a simple end-to-end example that ties together everything you've learned in this guide: a DCGAN trained on MNIST digits.

End-to-end example: a GAN training loop from scratch

You may be familiar with Generative Adversarial Networks (GANs). GANs can generate new images that look almost real, by learning the latent distribution of a training dataset of images (the "latent space" of the images).

A GAN is made of two parts: a "generator" model that maps points in the latent space to points in image space, and a "discriminator" model, a classifier that can tell the difference between real images (from the training dataset) and fake images (the output of the generator network).

A GAN training loop looks like this:

1) Train the discriminator.
- Sample a batch of random points in the latent space.
- Turn the points into fake images via the "generator" model.
- Get a batch of real images and combine them with the generated images.
- Train the "discriminator" model to classify generated vs. real images.

2) Train the generator.
- Sample random points in the latent space.
- Turn the points into fake images via the "generator" network.
- Get a batch of real images and combine them with the generated images.
- Train the "generator" model to "fool" the discriminator and classify the fake images as real.

For a much more detailed overview of how GANs work, see [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python).

Let's implement this training loop. First, create the discriminator meant to classify fake vs. real digits:
###Code
discriminator = keras.Sequential(
[
keras.Input(shape=(28, 28, 1)),
layers.Conv2D(64, (3, 3), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.Conv2D(128, (3, 3), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.GlobalMaxPooling2D(),
layers.Dense(1),
],
name="discriminator",
)
discriminator.summary()
###Output
_____no_output_____
###Markdown
Then let's create a generator network, which turns latent vectors into outputs of shape `(28, 28, 1)` (representing MNIST digits):
###Code
latent_dim = 128
generator = keras.Sequential(
[
keras.Input(shape=(latent_dim,)),
# We want to generate 128 coefficients to reshape into a 7x7x128 map
layers.Dense(7 * 7 * 128),
layers.LeakyReLU(alpha=0.2),
layers.Reshape((7, 7, 128)),
layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.Conv2D(1, (7, 7), padding="same", activation="sigmoid"),
],
name="generator",
)
###Output
_____no_output_____
###Markdown
Here's the key bit: the training loop. As you can see it is quite straightforward. The training step function only takes 17 lines.
###Code
# Instantiate one optimizer for the discriminator and another for the generator.
d_optimizer = keras.optimizers.Adam(learning_rate=0.0003)
g_optimizer = keras.optimizers.Adam(learning_rate=0.0004)
# Instantiate a loss function.
loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
@tf.function
def train_step(real_images):
# Sample random points in the latent space
random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))
# Decode them to fake images
generated_images = generator(random_latent_vectors)
# Combine them with real images
combined_images = tf.concat([generated_images, real_images], axis=0)
# Assemble labels discriminating real from fake images
labels = tf.concat(
[tf.ones((batch_size, 1)), tf.zeros((real_images.shape[0], 1))], axis=0
)
# Add random noise to the labels - important trick!
labels += 0.05 * tf.random.uniform(labels.shape)
# Train the discriminator
with tf.GradientTape() as tape:
predictions = discriminator(combined_images)
d_loss = loss_fn(labels, predictions)
grads = tape.gradient(d_loss, discriminator.trainable_weights)
d_optimizer.apply_gradients(zip(grads, discriminator.trainable_weights))
# Sample random points in the latent space
random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))
# Assemble labels that say "all real images"
misleading_labels = tf.zeros((batch_size, 1))
# Train the generator (note that we should *not* update the weights
# of the discriminator)!
with tf.GradientTape() as tape:
predictions = discriminator(generator(random_latent_vectors))
g_loss = loss_fn(misleading_labels, predictions)
grads = tape.gradient(g_loss, generator.trainable_weights)
g_optimizer.apply_gradients(zip(grads, generator.trainable_weights))
return d_loss, g_loss, generated_images
###Output
_____no_output_____
###Markdown
Let's train our GAN by repeatedly calling `train_step` on batches of images. Since our discriminator and generator are convnets, you're going to want to run this code on a GPU.
###Code
import os
# Prepare the dataset. We use both the training & test MNIST digits.
batch_size = 64
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
all_digits = np.concatenate([x_train, x_test])
all_digits = all_digits.astype("float32") / 255.0
all_digits = np.reshape(all_digits, (-1, 28, 28, 1))
dataset = tf.data.Dataset.from_tensor_slices(all_digits)
dataset = dataset.shuffle(buffer_size=1024).batch(batch_size)
epochs = 1 # In practice you need at least 20 epochs to generate nice digits.
save_dir = "./"
for epoch in range(epochs):
print("\nStart epoch", epoch)
for step, real_images in enumerate(dataset):
# Train the discriminator & generator on one batch of real images.
d_loss, g_loss, generated_images = train_step(real_images)
# Logging.
if step % 200 == 0:
# Print metrics
print("discriminator loss at step %d: %.2f" % (step, d_loss))
print("adversarial loss at step %d: %.2f" % (step, g_loss))
# Save one generated image
img = tf.keras.preprocessing.image.array_to_img(
generated_images[0] * 255.0, scale=False
)
img.save(os.path.join(save_dir, "generated_img" + str(step) + ".png"))
# To limit execution time we stop after 10 steps.
# Remove the lines below to actually train the model!
if step > 10:
break
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Writing a training loop from scratch View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Setup
###Code
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
###Output
_____no_output_____
###Markdown
IntroductionKeras provides default training and evaluation loops, `fit()` and `evaluate()`.Their usage is covered in the guide[Training & evaluation with the built-in methods](https://www.tensorflow.org/guide/keras/train_and_evaluate/).If you want to customize the learning algorithm of your model while still leveragingthe convenience of `fit()`(for instance, to train a GAN using `fit()`), you can subclass the `Model` class andimplement your own `train_step()` method, whichis called repeatedly during `fit()`. This is covered in the guide[Customizing what happens in `fit()`](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit/).Now, if you want very low-level control over training & evaluation, you should writeyour own training & evaluation loops from scratch. This is what this guide is about. Using the `GradientTape`: a first end-to-end exampleCalling a model inside a `GradientTape` scope enables you to retrieve the gradients ofthe trainable weights of the layer with respect to a loss value. Using an optimizerinstance, you can use these gradients to update these variables (which you canretrieve using `model.trainable_weights`).Let's consider a simple MNIST model:
###Code
inputs = keras.Input(shape=(784,), name="digits")
x1 = layers.Dense(64, activation="relu")(inputs)
x2 = layers.Dense(64, activation="relu")(x1)
outputs = layers.Dense(10, name="predictions")(x2)
model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
Let's train it using mini-batch gradient with a custom training loop.First, we're going to need an optimizer, a loss function, and a dataset:
###Code
# Instantiate an optimizer.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Prepare the training dataset.
batch_size = 64
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = np.reshape(x_train, (-1, 784))
x_test = np.reshape(x_test, (-1, 784))
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)
###Output
_____no_output_____
###Markdown
Here's our training loop:- We open a `for` loop that iterates over epochs- For each epoch, we open a `for` loop that iterates over the dataset, in batches- For each batch, we open a `GradientTape()` scope- Inside this scope, we call the model (forward pass) and compute the loss- Outside the scope, we retrieve the gradients of the weightsof the model with regard to the loss- Finally, we use the optimizer to update the weights of the model based on thegradients
###Code
epochs = 2
for epoch in range(epochs):
print("\nStart of epoch %d" % (epoch,))
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
# Open a GradientTape to record the operations run
# during the forward pass, which enables auto-differentiation.
with tf.GradientTape() as tape:
# Run the forward pass of the layer.
# The operations that the layer applies
# to its inputs are going to be recorded
# on the GradientTape.
logits = model(x_batch_train, training=True) # Logits for this minibatch
# Compute the loss value for this minibatch.
loss_value = loss_fn(y_batch_train, logits)
# Use the gradient tape to automatically retrieve
# the gradients of the trainable variables with respect to the loss.
grads = tape.gradient(loss_value, model.trainable_weights)
# Run one step of gradient descent by updating
# the value of the variables to minimize the loss.
optimizer.apply_gradients(zip(grads, model.trainable_weights))
# Log every 200 batches.
if step % 200 == 0:
print(
"Training loss (for one batch) at step %d: %.4f"
% (step, float(loss_value))
)
print("Seen so far: %s samples" % ((step + 1) * 64))
###Output
_____no_output_____
###Markdown
Low-level handling of metricsLet's add metrics monitoring to this basic loop.You can readily reuse the built-in metrics (or custom ones you wrote) in such trainingloops written from scratch. Here's the flow:- Instantiate the metric at the start of the loop- Call `metric.update_state()` after each batch- Call `metric.result()` when you need to display the current value of the metric- Call `metric.reset_states()` when you need to clear the state of the metric(typically at the end of an epoch)Let's use this knowledge to compute `SparseCategoricalAccuracy` on validation data atthe end of each epoch:
###Code
# Get model
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
# Instantiate an optimizer to train the model.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Prepare the metrics.
train_acc_metric = keras.metrics.SparseCategoricalAccuracy()
val_acc_metric = keras.metrics.SparseCategoricalAccuracy()
# Prepare the training dataset.
batch_size = 64
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)
# Prepare the validation dataset.
# Reserve 10,000 samples for validation.
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
###Output
_____no_output_____
###Markdown
Here's our training & evaluation loop:
###Code
import time
epochs = 2
for epoch in range(epochs):
print("\nStart of epoch %d" % (epoch,))
start_time = time.time()
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
with tf.GradientTape() as tape:
logits = model(x_batch_train, training=True)
loss_value = loss_fn(y_batch_train, logits)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
# Update training metric.
train_acc_metric.update_state(y_batch_train, logits)
# Log every 200 batches.
if step % 200 == 0:
print(
"Training loss (for one batch) at step %d: %.4f"
% (step, float(loss_value))
)
print("Seen so far: %d samples" % ((step + 1) * 64))
# Display metrics at the end of each epoch.
train_acc = train_acc_metric.result()
print("Training acc over epoch: %.4f" % (float(train_acc),))
# Reset training metrics at the end of each epoch
train_acc_metric.reset_states()
# Run a validation loop at the end of each epoch.
for x_batch_val, y_batch_val in val_dataset:
val_logits = model(x_batch_val, training=False)
# Update val metrics
val_acc_metric.update_state(y_batch_val, val_logits)
val_acc = val_acc_metric.result()
val_acc_metric.reset_states()
print("Validation acc: %.4f" % (float(val_acc),))
print("Time taken: %.2fs" % (time.time() - start_time))
###Output
_____no_output_____
###Markdown
Speeding-up your training step with `tf.function`The default runtime in TensorFlow 2.0 is[eager execution](https://www.tensorflow.org/guide/eager). As such, our training loopabove executes eagerly.This is great for debugging, but graph compilation has a definite performanceadvantage. Describing your computation as a static graph enables the frameworkto apply global performance optimizations. This is impossible whenthe framework is constrained to greedly execute one operation after another,with no knowledge of what comes next.You can compile into a static graph any function that takes tensors as input.Just add a `@tf.function` decorator on it, like this:
###Code
@tf.function
def train_step(x, y):
with tf.GradientTape() as tape:
logits = model(x, training=True)
loss_value = loss_fn(y, logits)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
train_acc_metric.update_state(y, logits)
return loss_value
###Output
_____no_output_____
###Markdown
Let's do the same with the evaluation step:
###Code
@tf.function
def test_step(x, y):
val_logits = model(x, training=False)
val_acc_metric.update_state(y, val_logits)
###Output
_____no_output_____
###Markdown
Now, let's re-run our training loop with this compiled training step:
###Code
import time
epochs = 2
for epoch in range(epochs):
print("\nStart of epoch %d" % (epoch,))
start_time = time.time()
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
loss_value = train_step(x_batch_train, y_batch_train)
# Log every 200 batches.
if step % 200 == 0:
print(
"Training loss (for one batch) at step %d: %.4f"
% (step, float(loss_value))
)
print("Seen so far: %d samples" % ((step + 1) * 64))
# Display metrics at the end of each epoch.
train_acc = train_acc_metric.result()
print("Training acc over epoch: %.4f" % (float(train_acc),))
# Reset training metrics at the end of each epoch
train_acc_metric.reset_states()
# Run a validation loop at the end of each epoch.
for x_batch_val, y_batch_val in val_dataset:
test_step(x_batch_val, y_batch_val)
val_acc = val_acc_metric.result()
val_acc_metric.reset_states()
print("Validation acc: %.4f" % (float(val_acc),))
print("Time taken: %.2fs" % (time.time() - start_time))
###Output
_____no_output_____
###Markdown
Much faster, isn't it? Low-level handling of losses tracked by the modelLayers & models recursively track any losses created during the forward passby layers that call `self.add_loss(value)`. The resulting list of scalar lossvalues are available via the property `model.losses`at the end of the forward pass.If you want to be using these loss components, you should sum themand add them to the main loss in your training step.Consider this layer, that creates an activity regularization loss:
###Code
class ActivityRegularizationLayer(layers.Layer):
def call(self, inputs):
self.add_loss(1e-2 * tf.reduce_sum(inputs))
return inputs
###Output
_____no_output_____
###Markdown
Let's build a really simple model that uses it:
###Code
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu")(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
Here's what our training step should look like now:
###Code
@tf.function
def train_step(x, y):
with tf.GradientTape() as tape:
logits = model(x, training=True)
loss_value = loss_fn(y, logits)
# Add any extra losses created during the forward pass.
loss_value += sum(model.losses)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
train_acc_metric.update_state(y, logits)
return loss_value
###Output
_____no_output_____
###Markdown
SummaryNow you know everything there is to know about using built-in training loops andwriting your own from scratch.To conclude, here's a simple end-to-end example that ties together everythingyou've learned in this guide: a DCGAN trained on MNIST digits. End-to-end example: a GAN training loop from scratchYou may be familiar with Generative Adversarial Networks (GANs). GANs can generate newimages that look almost real, by learning the latent distribution of a trainingdataset of images (the "latent space" of the images).A GAN is made of two parts: a "generator" model that maps points in the latentspace to points in image space, a "discriminator" model, a classifierthat can tell the difference between real images (from the training dataset)and fake images (the output of the generator network).A GAN training loop looks like this:1) Train the discriminator.- Sample a batch of random points in the latent space.- Turn the points into fake images via the "generator" model.- Get a batch of real images and combine them with the generated images.- Train the "discriminator" model to classify generated vs. real images.2) Train the generator.- Sample random points in the latent space.- Turn the points into fake images via the "generator" network.- Get a batch of real images and combine them with the generated images.- Train the "generator" model to "fool" the discriminator and classify the fake imagesas real.For a much more detailed overview of how GANs works, see[Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python).Let's implement this training loop. First, create the discriminator meant to classifyfake vs real digits:
###Code
discriminator = keras.Sequential(
[
keras.Input(shape=(28, 28, 1)),
layers.Conv2D(64, (3, 3), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.Conv2D(128, (3, 3), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.GlobalMaxPooling2D(),
layers.Dense(1),
],
name="discriminator",
)
discriminator.summary()
###Output
_____no_output_____
###Markdown
Then let's create a generator network,that turns latent vectors into outputs of shape `(28, 28, 1)` (representingMNIST digits):
###Code
latent_dim = 128
generator = keras.Sequential(
[
keras.Input(shape=(latent_dim,)),
# We want to generate 128 coefficients to reshape into a 7x7x128 map
layers.Dense(7 * 7 * 128),
layers.LeakyReLU(alpha=0.2),
layers.Reshape((7, 7, 128)),
layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.Conv2D(1, (7, 7), padding="same", activation="sigmoid"),
],
name="generator",
)
###Output
_____no_output_____
###Markdown
Here's the key bit: the training loop. As you can see it is quite straightforward. Thetraining step function only takes 17 lines.
###Code
# Instantiate one optimizer for the discriminator and another for the generator.
d_optimizer = keras.optimizers.Adam(learning_rate=0.0003)
g_optimizer = keras.optimizers.Adam(learning_rate=0.0004)
# Instantiate a loss function.
loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
@tf.function
def train_step(real_images):
# Sample random points in the latent space
random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))
# Decode them to fake images
generated_images = generator(random_latent_vectors)
# Combine them with real images
combined_images = tf.concat([generated_images, real_images], axis=0)
# Assemble labels discriminating real from fake images
labels = tf.concat(
[tf.ones((batch_size, 1)), tf.zeros((real_images.shape[0], 1))], axis=0
)
# Add random noise to the labels - important trick!
labels += 0.05 * tf.random.uniform(labels.shape)
# Train the discriminator
with tf.GradientTape() as tape:
predictions = discriminator(combined_images)
d_loss = loss_fn(labels, predictions)
grads = tape.gradient(d_loss, discriminator.trainable_weights)
d_optimizer.apply_gradients(zip(grads, discriminator.trainable_weights))
# Sample random points in the latent space
random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))
# Assemble labels that say "all real images"
misleading_labels = tf.zeros((batch_size, 1))
# Train the generator (note that we should *not* update the weights
# of the discriminator)!
with tf.GradientTape() as tape:
predictions = discriminator(generator(random_latent_vectors))
g_loss = loss_fn(misleading_labels, predictions)
grads = tape.gradient(g_loss, generator.trainable_weights)
g_optimizer.apply_gradients(zip(grads, generator.trainable_weights))
return d_loss, g_loss, generated_images
###Output
_____no_output_____
###Markdown
Let's train our GAN, by repeatedly calling `train_step` on batches of images.Since our discriminator and generator are convnets, you're going to want torun this code on a GPU.
###Code
import os
# Prepare the dataset. We use both the training & test MNIST digits.
batch_size = 64
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
all_digits = np.concatenate([x_train, x_test])
all_digits = all_digits.astype("float32") / 255.0
all_digits = np.reshape(all_digits, (-1, 28, 28, 1))
dataset = tf.data.Dataset.from_tensor_slices(all_digits)
dataset = dataset.shuffle(buffer_size=1024).batch(batch_size)
epochs = 1 # In practice you need at least 20 epochs to generate nice digits.
save_dir = "./"
for epoch in range(epochs):
print("\nStart epoch", epoch)
for step, real_images in enumerate(dataset):
# Train the discriminator & generator on one batch of real images.
d_loss, g_loss, generated_images = train_step(real_images)
# Logging.
if step % 200 == 0:
# Print metrics
print("discriminator loss at step %d: %.2f" % (step, d_loss))
print("adversarial loss at step %d: %.2f" % (step, g_loss))
# Save one generated image
img = tf.keras.preprocessing.image.array_to_img(
generated_images[0] * 255.0, scale=False
)
img.save(os.path.join(save_dir, "generated_img" + str(step) + ".png"))
# To limit execution time we stop after 10 steps.
# Remove the lines below to actually train the model!
if step > 10:
break
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Writing a training loop from scratch View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Setup
###Code
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
###Output
_____no_output_____
###Markdown
IntroductionKeras provides default training and evaluation loops, `fit()` and `evaluate()`.Their usage is covered in the guide[Training & evaluation with the built-in methods](https://www.tensorflow.org/guide/keras/train_and_evaluate/).If you want to customize the learning algorithm of your model while still leveragingthe convenience of `fit()`(for instance, to train a GAN using `fit()`), you can subclass the `Model` class andimplement your own `train_step()` method, whichis called repeatedly during `fit()`. This is covered in the guide[Customizing what happens in `fit()`](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit/).Now, if you want very low-level control over training & evaluation, you should writeyour own training & evaluation loops from scratch. This is what this guide is about. Using the `GradientTape`: a first end-to-end exampleCalling a model inside a `GradientTape` scope enables you to retrieve the gradients ofthe trainable weights of the layer with respect to a loss value. Using an optimizerinstance, you can use these gradients to update these variables (which you canretrieve using `model.trainable_weights`).Let's consider a simple MNIST model:
###Code
inputs = keras.Input(shape=(784,), name="digits")
x1 = layers.Dense(64, activation="relu")(inputs)
x2 = layers.Dense(64, activation="relu")(x1)
outputs = layers.Dense(10, name="predictions")(x2)
model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
Let's train it using mini-batch gradient with a custom training loop.First, we're going to need an optimizer, a loss function, and a dataset:
###Code
# Instantiate an optimizer.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Prepare the training dataset.
batch_size = 64
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = np.reshape(x_train, (-1, 784))
x_test = np.reshape(x_test, (-1, 784))
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)
###Output
_____no_output_____
###Markdown
Here's our training loop:- We open a `for` loop that iterates over epochs- For each epoch, we open a `for` loop that iterates over the dataset, in batches- For each batch, we open a `GradientTape()` scope- Inside this scope, we call the model (forward pass) and compute the loss- Outside the scope, we retrieve the gradients of the weightsof the model with regard to the loss- Finally, we use the optimizer to update the weights of the model based on thegradients
###Code
epochs = 2
for epoch in range(epochs):
print("\nStart of epoch %d" % (epoch,))
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
# Open a GradientTape to record the operations run
# during the forward pass, which enables auto-differentiation.
with tf.GradientTape() as tape:
# Run the forward pass of the layer.
# The operations that the layer applies
# to its inputs are going to be recorded
# on the GradientTape.
logits = model(x_batch_train, training=True) # Logits for this minibatch
# Compute the loss value for this minibatch.
loss_value = loss_fn(y_batch_train, logits)
# Use the gradient tape to automatically retrieve
# the gradients of the trainable variables with respect to the loss.
grads = tape.gradient(loss_value, model.trainable_weights)
# Run one step of gradient descent by updating
# the value of the variables to minimize the loss.
optimizer.apply_gradients(zip(grads, model.trainable_weights))
# Log every 200 batches.
if step % 200 == 0:
print(
"Training loss (for one batch) at step %d: %.4f"
% (step, float(loss_value))
)
print("Seen so far: %s samples" % ((step + 1) * 64))
###Output
_____no_output_____
###Markdown
Low-level handling of metricsLet's add metrics monitoring to this basic loop.You can readily reuse the built-in metrics (or custom ones you wrote) in such trainingloops written from scratch. Here's the flow:- Instantiate the metric at the start of the loop- Call `metric.update_state()` after each batch- Call `metric.result()` when you need to display the current value of the metric- Call `metric.reset_states()` when you need to clear the state of the metric(typically at the end of an epoch)Let's use this knowledge to compute `SparseCategoricalAccuracy` on validation data atthe end of each epoch:
###Code
# Get model
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
# Instantiate an optimizer to train the model.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Prepare the metrics.
train_acc_metric = keras.metrics.SparseCategoricalAccuracy()
val_acc_metric = keras.metrics.SparseCategoricalAccuracy()
# Prepare the training dataset.
batch_size = 64
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)
# Prepare the validation dataset.
# Reserve 10,000 samples for validation.
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
###Output
_____no_output_____
###Markdown
Here's our training & evaluation loop:
###Code
import time
epochs = 2
for epoch in range(epochs):
print("\nStart of epoch %d" % (epoch,))
start_time = time.time()
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
with tf.GradientTape() as tape:
logits = model(x_batch_train, training=True)
loss_value = loss_fn(y_batch_train, logits)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
# Update training metric.
train_acc_metric.update_state(y_batch_train, logits)
# Log every 200 batches.
if step % 200 == 0:
print(
"Training loss (for one batch) at step %d: %.4f"
% (step, float(loss_value))
)
print("Seen so far: %d samples" % ((step + 1) * 64))
# Display metrics at the end of each epoch.
train_acc = train_acc_metric.result()
print("Training acc over epoch: %.4f" % (float(train_acc),))
# Reset training metrics at the end of each epoch
train_acc_metric.reset_states()
# Run a validation loop at the end of each epoch.
for x_batch_val, y_batch_val in val_dataset:
val_logits = model(x_batch_val, training=False)
# Update val metrics
val_acc_metric.update_state(y_batch_val, val_logits)
val_acc = val_acc_metric.result()
val_acc_metric.reset_states()
print("Validation acc: %.4f" % (float(val_acc),))
print("Time taken: %.2fs" % (time.time() - start_time))
###Output
_____no_output_____
###Markdown
Speeding-up your training step with `tf.function`The default runtime in TensorFlow 2.0 is[eager execution](https://www.tensorflow.org/guide/eager). As such, our training loopabove executes eagerly.This is great for debugging, but graph compilation has a definite performanceadvantage. Describing your computation as a static graph enables the frameworkto apply global performance optimizations. This is impossible whenthe framework is constrained to greedly execute one operation after another,with no knowledge of what comes next.You can compile into a static graph any function that takes tensors as input.Just add a `@tf.function` decorator on it, like this:
###Code
@tf.function
def train_step(x, y):
with tf.GradientTape() as tape:
logits = model(x, training=True)
loss_value = loss_fn(y, logits)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
train_acc_metric.update_state(y, logits)
return loss_value
###Output
_____no_output_____
###Markdown
Let's do the same with the evaluation step:
###Code
@tf.function
def test_step(x, y):
val_logits = model(x, training=False)
val_acc_metric.update_state(y, val_logits)
###Output
_____no_output_____
###Markdown
Now, let's re-run our training loop with this compiled training step:
###Code
import time
epochs = 2
for epoch in range(epochs):
print("\nStart of epoch %d" % (epoch,))
start_time = time.time()
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
loss_value = train_step(x_batch_train, y_batch_train)
# Log every 200 batches.
if step % 200 == 0:
print(
"Training loss (for one batch) at step %d: %.4f"
% (step, float(loss_value))
)
print("Seen so far: %d samples" % ((step + 1) * 64))
# Display metrics at the end of each epoch.
train_acc = train_acc_metric.result()
print("Training acc over epoch: %.4f" % (float(train_acc),))
# Reset training metrics at the end of each epoch
train_acc_metric.reset_states()
# Run a validation loop at the end of each epoch.
for x_batch_val, y_batch_val in val_dataset:
test_step(x_batch_val, y_batch_val)
val_acc = val_acc_metric.result()
val_acc_metric.reset_states()
print("Validation acc: %.4f" % (float(val_acc),))
print("Time taken: %.2fs" % (time.time() - start_time))
###Output
_____no_output_____
###Markdown
Much faster, isn't it? Low-level handling of losses tracked by the modelLayers & models recursively track any losses created during the forward passby layers that call `self.add_loss(value)`. The resulting list of scalar lossvalues are available via the property `model.losses`at the end of the forward pass.If you want to be using these loss components, you should sum themand add them to the main loss in your training step.Consider this layer, that creates an activity regularization loss:
###Code
class ActivityRegularizationLayer(layers.Layer):
def call(self, inputs):
self.add_loss(1e-2 * tf.reduce_sum(inputs))
return inputs
###Output
_____no_output_____
###Markdown
Let's build a really simple model that uses it:
###Code
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu")(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
Here's what our training step should look like now:
###Code
@tf.function
def train_step(x, y):
with tf.GradientTape() as tape:
logits = model(x, training=True)
loss_value = loss_fn(y, logits)
# Add any extra losses created during the forward pass.
loss_value += sum(model.losses)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
train_acc_metric.update_state(y, logits)
return loss_value
###Output
_____no_output_____
###Markdown
Summary Now you know everything there is to know about using built-in training loops and writing your own from scratch. To conclude, here's a simple end-to-end example that ties together everything you've learned in this guide: a DCGAN trained on MNIST digits. End-to-end example: a GAN training loop from scratch You may be familiar with Generative Adversarial Networks (GANs). GANs can generate new images that look almost real, by learning the latent distribution of a training dataset of images (the "latent space" of the images). A GAN is made of two parts: a "generator" model that maps points in the latent space to points in image space, and a "discriminator" model, a classifier that can tell the difference between real images (from the training dataset) and fake images (the output of the generator network). A GAN training loop looks like this: 1) Train the discriminator. - Sample a batch of random points in the latent space. - Turn the points into fake images via the "generator" model. - Get a batch of real images and combine them with the generated images. - Train the "discriminator" model to classify generated vs. real images. 2) Train the generator. - Sample random points in the latent space. - Turn the points into fake images via the "generator" network. - Get a batch of real images and combine them with the generated images. - Train the "generator" model to "fool" the discriminator and classify the fake images as real. For a much more detailed overview of how GANs work, see [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python). Let's implement this training loop. First, create the discriminator meant to classify fake vs. real digits:
###Code
discriminator = keras.Sequential(
[
keras.Input(shape=(28, 28, 1)),
layers.Conv2D(64, (3, 3), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.Conv2D(128, (3, 3), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.GlobalMaxPooling2D(),
layers.Dense(1),
],
name="discriminator",
)
discriminator.summary()
###Output
_____no_output_____
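###Markdown
If you want to double-check the shapes, here is a quick sketch: the discriminator maps a batch of 28x28x1 images to one real/fake logit per image (the values are meaningless while the weights are untrained).
###Code
fake_batch = tf.random.normal(shape=(4, 28, 28, 1))
print(discriminator(fake_batch).shape)  # (4, 1): one logit per image
###Output
_____no_output_____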
###Markdown
Then let's create a generator network that turns latent vectors into outputs of shape `(28, 28, 1)` (representing MNIST digits):
###Code
latent_dim = 128
generator = keras.Sequential(
[
keras.Input(shape=(latent_dim,)),
# We want to generate 128 coefficients to reshape into a 7x7x128 map
layers.Dense(7 * 7 * 128),
layers.LeakyReLU(alpha=0.2),
layers.Reshape((7, 7, 128)),
layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.Conv2D(1, (7, 7), padding="same", activation="sigmoid"),
],
name="generator",
)
###Output
_____no_output_____
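###Markdown
Similarly, a quick check that the generator turns random latent vectors into images of the expected shape (again just a sketch; the untrained outputs are noise):
###Code
random_latent = tf.random.normal(shape=(4, latent_dim))
print(generator(random_latent).shape)  # (4, 28, 28, 1)
###Output
_____no_output_____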
###Markdown
Here's the key bit: the training loop. As you can see, it is quite straightforward. The training step function only takes 17 lines.
###Code
# Instantiate one optimizer for the discriminator and another for the generator.
d_optimizer = keras.optimizers.Adam(learning_rate=0.0003)
g_optimizer = keras.optimizers.Adam(learning_rate=0.0004)
# Instantiate a loss function.
loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
@tf.function
def train_step(real_images):
# Sample random points in the latent space
random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))
# Decode them to fake images
generated_images = generator(random_latent_vectors)
# Combine them with real images
combined_images = tf.concat([generated_images, real_images], axis=0)
# Assemble labels discriminating real from fake images
labels = tf.concat(
[tf.ones((batch_size, 1)), tf.zeros((real_images.shape[0], 1))], axis=0
)
# Add random noise to the labels - important trick!
labels += 0.05 * tf.random.uniform(labels.shape)
# Train the discriminator
with tf.GradientTape() as tape:
predictions = discriminator(combined_images)
d_loss = loss_fn(labels, predictions)
grads = tape.gradient(d_loss, discriminator.trainable_weights)
d_optimizer.apply_gradients(zip(grads, discriminator.trainable_weights))
# Sample random points in the latent space
random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))
# Assemble labels that say "all real images"
misleading_labels = tf.zeros((batch_size, 1))
# Train the generator (note that we should *not* update the weights
# of the discriminator)!
with tf.GradientTape() as tape:
predictions = discriminator(generator(random_latent_vectors))
g_loss = loss_fn(misleading_labels, predictions)
grads = tape.gradient(g_loss, generator.trainable_weights)
g_optimizer.apply_gradients(zip(grads, generator.trainable_weights))
return d_loss, g_loss, generated_images
###Output
_____no_output_____
###Markdown
Let's train our GAN by repeatedly calling `train_step` on batches of images. Since our discriminator and generator are convnets, you're going to want to run this code on a GPU.
###Code
import os
# Prepare the dataset. We use both the training & test MNIST digits.
batch_size = 64
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
all_digits = np.concatenate([x_train, x_test])
all_digits = all_digits.astype("float32") / 255.0
all_digits = np.reshape(all_digits, (-1, 28, 28, 1))
dataset = tf.data.Dataset.from_tensor_slices(all_digits)
dataset = dataset.shuffle(buffer_size=1024).batch(batch_size)
epochs = 1 # In practice you need at least 20 epochs to generate nice digits.
save_dir = "./"
for epoch in range(epochs):
print("\nStart epoch", epoch)
for step, real_images in enumerate(dataset):
# Train the discriminator & generator on one batch of real images.
d_loss, g_loss, generated_images = train_step(real_images)
# Logging.
if step % 200 == 0:
# Print metrics
print("discriminator loss at step %d: %.2f" % (step, d_loss))
print("adversarial loss at step %d: %.2f" % (step, g_loss))
# Save one generated image
img = tf.keras.preprocessing.image.array_to_img(
generated_images[0] * 255.0, scale=False
)
img.save(os.path.join(save_dir, "generated_img" + str(step) + ".png"))
# To limit execution time we stop after 10 steps.
# Remove the lines below to actually train the model!
if step > 10:
break
###Output
_____no_output_____
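###Markdown
Once training has run long enough, you can sample new digits by pushing fresh latent vectors through the generator. This is a small illustrative sketch that reuses `generator`, `latent_dim`, and `save_dir` from above; the file names are arbitrary.
###Code
# Sample a few latent vectors and decode them to images.
random_latent_vectors = tf.random.normal(shape=(9, latent_dim))
samples = generator(random_latent_vectors)
for i in range(9):
    img = tf.keras.preprocessing.image.array_to_img(samples[i] * 255.0, scale=False)
    img.save(os.path.join(save_dir, "sampled_digit_%d.png" % i))
###Output
_____no_output_____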
# To limit execution time we stop after 10 steps.
# Remove the lines below to actually train the model!
if step > 10:
break
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Writing a training loop from scratch Setup
###Code
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
###Output
_____no_output_____
###Markdown
IntroductionKeras provides default training and evaluation loops, `fit()` and `evaluate()`.Their usage is covered in the guide[Training & evaluation with the built-in methods](https://www.tensorflow.org/guide/keras/train_and_evaluate/).If you want to customize the learning algorithm of your model while still leveragingthe convenience of `fit()`(for instance, to train a GAN using `fit()`), you can subclass the `Model` class andimplement your own `train_step()` method, whichis called repeatedly during `fit()`. This is covered in the guide[Customizing what happens in `fit()`](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit/).Now, if you want very low-level control over training & evaluation, you should writeyour own training & evaluation loops from scratch. This is what this guide is about. Using the `GradientTape`: a first end-to-end exampleCalling a model inside a `GradientTape` scope enables you to retrieve the gradients ofthe trainable weights of the layer with respect to a loss value. Using an optimizerinstance, you can use these gradients to update these variables (which you canretrieve using `model.trainable_weights`).Let's consider a simple MNIST model:
###Code
inputs = keras.Input(shape=(784,), name="digits")
x1 = layers.Dense(64, activation="relu")(inputs)
x2 = layers.Dense(64, activation="relu")(x1)
outputs = layers.Dense(10, name="predictions")(x2)
model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
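###Markdown
Before training the model, here is a minimal standalone illustration of `GradientTape` itself (this toy cell is an addition, not part of the original guide):
###Code
# Gradient of y = w*w with respect to w, evaluated at w = 3.0 (expect 6.0)
w = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = w * w
print(tape.gradient(y, w))  # tf.Tensor(6.0, shape=(), dtype=float32)
###Output
_____no_output_____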
###Markdown
Let's train it using mini-batch gradient descent with a custom training loop. First, we're going to need an optimizer, a loss function, and a dataset:
###Code
# Instantiate an optimizer.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Prepare the training dataset.
batch_size = 64
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = np.reshape(x_train, (-1, 784))
x_test = np.reshape(x_test, (-1, 784))
# Reserve 10,000 samples for validation.
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]
# Prepare the training dataset.
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)
# Prepare the validation dataset.
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(batch_size)
###Output
_____no_output_____
###Markdown
Here's our training loop:- We open a `for` loop that iterates over epochs- For each epoch, we open a `for` loop that iterates over the dataset, in batches- For each batch, we open a `GradientTape()` scope- Inside this scope, we call the model (forward pass) and compute the loss- Outside the scope, we retrieve the gradients of the weightsof the model with regard to the loss- Finally, we use the optimizer to update the weights of the model based on thegradients
###Code
epochs = 2
for epoch in range(epochs):
print("\nStart of epoch %d" % (epoch,))
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
# Open a GradientTape to record the operations run
# during the forward pass, which enables auto-differentiation.
with tf.GradientTape() as tape:
# Run the forward pass of the layer.
# The operations that the layer applies
# to its inputs are going to be recorded
# on the GradientTape.
logits = model(x_batch_train, training=True) # Logits for this minibatch
# Compute the loss value for this minibatch.
loss_value = loss_fn(y_batch_train, logits)
# Use the gradient tape to automatically retrieve
# the gradients of the trainable variables with respect to the loss.
grads = tape.gradient(loss_value, model.trainable_weights)
# Run one step of gradient descent by updating
# the value of the variables to minimize the loss.
optimizer.apply_gradients(zip(grads, model.trainable_weights))
# Log every 200 batches.
if step % 200 == 0:
print(
"Training loss (for one batch) at step %d: %.4f"
% (step, float(loss_value))
)
print("Seen so far: %s samples" % ((step + 1) * batch_size))
###Output
_____no_output_____
###Markdown
Low-level handling of metricsLet's add metrics monitoring to this basic loop.You can readily reuse the built-in metrics (or custom ones you wrote) in such trainingloops written from scratch. Here's the flow:- Instantiate the metric at the start of the loop- Call `metric.update_state()` after each batch- Call `metric.result()` when you need to display the current value of the metric- Call `metric.reset_states()` when you need to clear the state of the metric(typically at the end of an epoch)Let's use this knowledge to compute `SparseCategoricalAccuracy` on validation data atthe end of each epoch:
###Code
# Get model
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
# Instantiate an optimizer to train the model.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Prepare the metrics.
train_acc_metric = keras.metrics.SparseCategoricalAccuracy()
val_acc_metric = keras.metrics.SparseCategoricalAccuracy()
###Output
_____no_output_____
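###Markdown
As a quick standalone illustration (an addition, not from the original guide), here is the metric API flow on its own before we use it inside the loop below:
###Code
# update_state() accumulates, result() reads the current value, reset_states() clears
acc = keras.metrics.SparseCategoricalAccuracy()
acc.update_state([1, 2], [[0.1, 0.8, 0.1], [0.2, 0.3, 0.5]])
print(float(acc.result()))  # 1.0: both argmax predictions match the labels
acc.reset_states()          # start accumulating from scratch (e.g. for the next epoch)
###Output
_____no_output_____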
###Markdown
Here's our training & evaluation loop:
###Code
import time
epochs = 2
for epoch in range(epochs):
print("\nStart of epoch %d" % (epoch,))
start_time = time.time()
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
with tf.GradientTape() as tape:
logits = model(x_batch_train, training=True)
loss_value = loss_fn(y_batch_train, logits)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
# Update training metric.
train_acc_metric.update_state(y_batch_train, logits)
# Log every 200 batches.
if step % 200 == 0:
print(
"Training loss (for one batch) at step %d: %.4f"
% (step, float(loss_value))
)
print("Seen so far: %d samples" % ((step + 1) * batch_size))
# Display metrics at the end of each epoch.
train_acc = train_acc_metric.result()
print("Training acc over epoch: %.4f" % (float(train_acc),))
# Reset training metrics at the end of each epoch
train_acc_metric.reset_states()
# Run a validation loop at the end of each epoch.
for x_batch_val, y_batch_val in val_dataset:
val_logits = model(x_batch_val, training=False)
# Update val metrics
val_acc_metric.update_state(y_batch_val, val_logits)
val_acc = val_acc_metric.result()
val_acc_metric.reset_states()
print("Validation acc: %.4f" % (float(val_acc),))
print("Time taken: %.2fs" % (time.time() - start_time))
###Output
_____no_output_____
###Markdown
Speeding-up your training step with `tf.function`The default runtime in TensorFlow 2 is [eager execution](https://www.tensorflow.org/guide/eager). As such, our training loop above executes eagerly. This is great for debugging, but graph compilation has a definite performance advantage. Describing your computation as a static graph enables the framework to apply global performance optimizations. This is impossible when the framework is constrained to greedily execute one operation after another, with no knowledge of what comes next. You can compile into a static graph any function that takes tensors as input. Just add a `@tf.function` decorator on it, like this:
###Code
@tf.function
def train_step(x, y):
with tf.GradientTape() as tape:
logits = model(x, training=True)
loss_value = loss_fn(y, logits)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
train_acc_metric.update_state(y, logits)
return loss_value
###Output
_____no_output_____
###Markdown
Let's do the same with the evaluation step:
###Code
@tf.function
def test_step(x, y):
val_logits = model(x, training=False)
val_acc_metric.update_state(y, val_logits)
###Output
_____no_output_____
###Markdown
Now, let's re-run our training loop with this compiled training step:
###Code
import time
epochs = 2
for epoch in range(epochs):
print("\nStart of epoch %d" % (epoch,))
start_time = time.time()
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
loss_value = train_step(x_batch_train, y_batch_train)
# Log every 200 batches.
if step % 200 == 0:
print(
"Training loss (for one batch) at step %d: %.4f"
% (step, float(loss_value))
)
print("Seen so far: %d samples" % ((step + 1) * batch_size))
# Display metrics at the end of each epoch.
train_acc = train_acc_metric.result()
print("Training acc over epoch: %.4f" % (float(train_acc),))
# Reset training metrics at the end of each epoch
train_acc_metric.reset_states()
# Run a validation loop at the end of each epoch.
for x_batch_val, y_batch_val in val_dataset:
test_step(x_batch_val, y_batch_val)
val_acc = val_acc_metric.result()
val_acc_metric.reset_states()
print("Validation acc: %.4f" % (float(val_acc),))
print("Time taken: %.2fs" % (time.time() - start_time))
###Output
_____no_output_____
###Markdown
Much faster, isn't it? Low-level handling of losses tracked by the modelLayers & models recursively track any losses created during the forward pass by layers that call `self.add_loss(value)`. The resulting list of scalar loss values is available via the property `model.losses` at the end of the forward pass. If you want to use these loss components, you should sum them and add them to the main loss in your training step. Consider this layer, which creates an activity regularization loss:
###Code
class ActivityRegularizationLayer(layers.Layer):
def call(self, inputs):
self.add_loss(1e-2 * tf.reduce_sum(inputs))
return inputs
###Output
_____no_output_____
###Markdown
Let's build a really simple model that uses it:
###Code
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu")(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
Here's what our training step should look like now:
###Code
@tf.function
def train_step(x, y):
with tf.GradientTape() as tape:
logits = model(x, training=True)
loss_value = loss_fn(y, logits)
# Add any extra losses created during the forward pass.
loss_value += sum(model.losses)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
train_acc_metric.update_state(y, logits)
return loss_value
###Output
_____no_output_____
###Markdown
SummaryNow you know everything there is to know about using built-in training loops and writing your own from scratch. To conclude, here's a simple end-to-end example that ties together everything you've learned in this guide: a DCGAN trained on MNIST digits. End-to-end example: a GAN training loop from scratchYou may be familiar with Generative Adversarial Networks (GANs). GANs can generate new images that look almost real, by learning the latent distribution of a training dataset of images (the "latent space" of the images). A GAN is made of two parts: a "generator" model that maps points in the latent space to points in image space, and a "discriminator" model, a classifier that can tell the difference between real images (from the training dataset) and fake images (the output of the generator network). A GAN training loop looks like this: 1) Train the discriminator. - Sample a batch of random points in the latent space. - Turn the points into fake images via the "generator" model. - Get a batch of real images and combine them with the generated images. - Train the "discriminator" model to classify generated vs. real images. 2) Train the generator. - Sample random points in the latent space. - Turn the points into fake images via the "generator" network. - Get a batch of real images and combine them with the generated images. - Train the "generator" model to "fool" the discriminator and classify the fake images as real. For a much more detailed overview of how GANs work, see [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python). Let's implement this training loop. First, create the discriminator meant to classify fake vs. real digits:
###Code
discriminator = keras.Sequential(
[
keras.Input(shape=(28, 28, 1)),
layers.Conv2D(64, (3, 3), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.Conv2D(128, (3, 3), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.GlobalMaxPooling2D(),
layers.Dense(1),
],
name="discriminator",
)
discriminator.summary()
###Output
_____no_output_____
###Markdown
Then let's create a generator network,that turns latent vectors into outputs of shape `(28, 28, 1)` (representingMNIST digits):
###Code
latent_dim = 128
generator = keras.Sequential(
[
keras.Input(shape=(latent_dim,)),
# We want to generate 128 coefficients to reshape into a 7x7x128 map
layers.Dense(7 * 7 * 128),
layers.LeakyReLU(alpha=0.2),
layers.Reshape((7, 7, 128)),
layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.Conv2D(1, (7, 7), padding="same", activation="sigmoid"),
],
name="generator",
)
###Output
_____no_output_____
###Markdown
Here's the key bit: the training loop. As you can see it is quite straightforward. Thetraining step function only takes 17 lines.
###Code
# Instantiate one optimizer for the discriminator and another for the generator.
d_optimizer = keras.optimizers.Adam(learning_rate=0.0003)
g_optimizer = keras.optimizers.Adam(learning_rate=0.0004)
# Instantiate a loss function.
loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
@tf.function
def train_step(real_images):
# Sample random points in the latent space
random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))
# Decode them to fake images
generated_images = generator(random_latent_vectors)
# Combine them with real images
combined_images = tf.concat([generated_images, real_images], axis=0)
# Assemble labels discriminating real from fake images
labels = tf.concat(
[tf.ones((batch_size, 1)), tf.zeros((real_images.shape[0], 1))], axis=0
)
# Add random noise to the labels - important trick!
labels += 0.05 * tf.random.uniform(labels.shape)
# Train the discriminator
with tf.GradientTape() as tape:
predictions = discriminator(combined_images)
d_loss = loss_fn(labels, predictions)
grads = tape.gradient(d_loss, discriminator.trainable_weights)
d_optimizer.apply_gradients(zip(grads, discriminator.trainable_weights))
# Sample random points in the latent space
random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))
# Assemble labels that say "all real images"
misleading_labels = tf.zeros((batch_size, 1))
# Train the generator (note that we should *not* update the weights
# of the discriminator)!
with tf.GradientTape() as tape:
predictions = discriminator(generator(random_latent_vectors))
g_loss = loss_fn(misleading_labels, predictions)
grads = tape.gradient(g_loss, generator.trainable_weights)
g_optimizer.apply_gradients(zip(grads, generator.trainable_weights))
return d_loss, g_loss, generated_images
###Output
_____no_output_____
###Markdown
Let's train our GAN, by repeatedly calling `train_step` on batches of images.Since our discriminator and generator are convnets, you're going to want torun this code on a GPU.
###Code
import os
# Prepare the dataset. We use both the training & test MNIST digits.
batch_size = 64
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
all_digits = np.concatenate([x_train, x_test])
all_digits = all_digits.astype("float32") / 255.0
all_digits = np.reshape(all_digits, (-1, 28, 28, 1))
dataset = tf.data.Dataset.from_tensor_slices(all_digits)
dataset = dataset.shuffle(buffer_size=1024).batch(batch_size)
epochs = 1 # In practice you need at least 20 epochs to generate nice digits.
save_dir = "./"
for epoch in range(epochs):
print("\nStart epoch", epoch)
for step, real_images in enumerate(dataset):
# Train the discriminator & generator on one batch of real images.
d_loss, g_loss, generated_images = train_step(real_images)
# Logging.
if step % 200 == 0:
# Print metrics
print("discriminator loss at step %d: %.2f" % (step, d_loss))
print("adversarial loss at step %d: %.2f" % (step, g_loss))
# Save one generated image
img = tf.keras.preprocessing.image.array_to_img(
generated_images[0] * 255.0, scale=False
)
img.save(os.path.join(save_dir, "generated_img" + str(step) + ".png"))
# To limit execution time we stop after 10 steps.
# Remove the lines below to actually train the model!
if step > 10:
break
###Output
_____no_output_____ |
examples/inference/nbtests/ex_SppQ_latent_inference.ipynb | ###Markdown
Simulating any compartmental model with testing and quarantine using the `SppQ` class
###Code
%matplotlib inline
import numpy as np
import pyross
import matplotlib.pyplot as plt
#from matplotlib import rc; rc('text', usetex=True)
###Output
_____no_output_____
###Markdown
The SIR model with quarantineBelow you will find the model-specification dictionary for the SIR model with quarantined states
###Code
model_spec = {
"classes" : ["S", "I"],
"S" : {
"infection" : [ ["I", "-beta"] ],
},
"I" : {
"linear" : [ ["I", "-gamma"] ],
"infection" : [ ["I", "beta"] ],
},
# S I R
"test_pos" : [ "p_falsepos", "p_truepos", "p_falsepos"] ,
"test_freq" : [ "pi_RS", "pi_I", "pi_RS"]
}
parameters = {
'beta' : 0.02,
'gamma' : 0.1,
'p_falsepos' : 0.01,
'p_truepos' : 0.9,
'pi_RS' : 0.1,
'pi_I' : 1
}
###Output
_____no_output_____
###Markdown
This corresponds to$$\begin{aligned}\dot{S}_i & = - \beta \sum_j C_{ij} \frac{I_j}{N_j} S_i - \tau_S S_i; &\dot{S}^Q_i & = \tau_S S_i \\\dot{I}_i & = \beta \sum_j C_{ij} \frac{I_j}{N_j} S_i - \gamma I_i - \tau_I I_i;&\dot{I}^Q_i & =- \gamma I_i^Q+ \tau_I I_i\\\dot{R}_i & = \gamma I_i- \tau_R R_i; &\dot{R}^Q_i & = \gamma I^Q_i+ \tau_R R_i;\end{aligned}$$Each of the classes, `S`, `I` and `R`, has a quarantined version, `SQ`, `IQ` and `RQ`. The dynamics within the quarantined states are the same as for the un-quarantined states, but there are no infection terms (assuming perfect quarantine). Individuals are quarantined upon testing positive, hence the total number $N^Q=S^Q+I^Q+R^Q$ would be the reported number of confirmed cases. The transition rates $\tau_S$, $\tau_I$, $\tau_R$ for irreversible transitions to the quarantined states are dependent on time and on other variables. They are determined by the overall testing rate $\tau_{tot}(t)$ and the parameters specified in `"test_pos"` and `"test_freq"` (ordered such that they match `S`, `I` and `R`). - `"test_pos"` specifies the probability $\kappa_S$, $\kappa_I$, $\kappa_R$ that a test performed on an individual of a given class is positive. For classes $R$ and $S$, this is the conditional probability of false positives, for class $I$ the conditional probability of a true positive- `"test_freq"` characterises the frequency $\pi_S$, $\pi_I$, $\pi_R$ of tests in a given class. The absolute values do not matter, only their relative magnitudes. If we consider symptomatic testing and set $\pi_I=1$, then $\pi_R=\pi_S$ is the fraction of people who would like to be tested because of symptoms of flu or cold among the population *not* infected with SARS-CoV-2. In models with several infected classes, this parameter can also be used to prioritise testing of patients with severe symptoms or elderly people- The rate of positive tests in each class is computed as $$ \tau_X=\tau_{tot}(t)\pi_X \kappa_X/\mathcal{N} $$ for $X\in\{S,I,R\}$ with the normalisation constant $$ \mathcal{N}=\sum_X \pi_X X$$ Next, we define the initial condition for all non-quarantined and quarantined states. $R$ is never specified but calculated from the total number. The initial value for $N^Q$ is specified for the auxiliary class `NiQ`. The (scalar) testing rate $\tau_{tot}(t)$ is specified as a Python function, similar to the time dependent contact matrix. Here, we specify a rapid increase from 100 to 1000 tests per day around day 30.
###Code
M = 2 # the population has two age groups
N = 5e4 # and this is the total population
# set the age structure
fi = np.array([0.25, 0.75]) # fraction of population in age age group
Ni = N*fi
# set the contact structure
CM = np.array([[18., 9.], [3., 12.]])
# Initial conditions as an array
S0 = np.array([Ni[0]-10, Ni[1]-10])
I0 = np.array([10, 10])
x0 = np.array([
S0[0], S0[1], # S
I0[0], I0[1], # I
0, 0, # R
0, 0, # SQ
0, 0, # IQ
0, 0 # RQ
])
def contactMatrix(t):
return CM
# Tests performed per day
def testRate(t):
return (900.*(1.+np.tanh((t-30.)/10.))/2.+100.)
# duration of simulation and data file
Tf = 100; Nf=Tf+1;
model = pyross.stochastic.SppQ(model_spec, parameters, M, Ni)
data = model.simulate(x0, contactMatrix, testRate, Tf, Nf)
data_array = data['X']
det_model = pyross.deterministic.SppQ(model_spec, parameters, M, Ni)
data_det = det_model.simulate(x0, contactMatrix, testRate, Tf, Nf)
data_array_data = data_det['X']
# non-quarantined version for comparison
model_specU = model_spec.copy()
model_specU.pop('test_freq')
model_specU.pop('test_pos')
modelU = pyross.stochastic.Spp(model_specU, parameters, M, Ni)
dataU = modelU.simulate(x0[0:(2*M)], contactMatrix, Tf, Nf)
# plot the data and obtain the epidemic curve
S = np.sum(model.model_class_data('S', data), axis=1)
I = np.sum(model.model_class_data('I', data), axis=1)
R = np.sum(model.model_class_data('R', data), axis=1)
SQ = np.sum(model.model_class_data('SQ', data), axis=1)
IQ = np.sum(model.model_class_data('IQ', data), axis=1)
RQ = np.sum(model.model_class_data('RQ', data), axis=1)
NQ = np.sum(model.model_class_data('NiQ', data), axis=1)
NQ_det = np.sum(det_model.model_class_data('NiQ', data_det), axis=1)
SU = np.sum(modelU.model_class_data('S', dataU), axis=1)
IU = np.sum(modelU.model_class_data('I', dataU), axis=1)
RU = np.sum(modelU.model_class_data('R', dataU), axis=1)
t = data['t']
fig = plt.figure(num=None, figsize=(16, 6), dpi=80, facecolor='w', edgecolor='k')
plt.rcParams.update({'font.size': 18})
plt.subplot(1, 2, 1)
plt.plot(t, S, '-', color="#348ABD", label='$S$', lw=3)
plt.plot(t, I, '-', color='#A60628', label='$I$', lw=3)
plt.plot(t, R, '-', color="dimgrey", label='$R$', lw=3)
plt.plot(t, SU, '--', color="#348ABD", label='$S$ (w/o $Q$)', lw=2)
plt.plot(t, IU, '--', color='#A60628', label='$I$ (w/o $Q$)', lw=2)
plt.plot(t, RU, '--', color="dimgrey", label='$R$ (w/o $Q$)', lw=2)
plt.legend(fontsize=18); plt.grid()
plt.autoscale(enable=True, axis='x', tight=True)
plt.ylabel('Compartment value')
plt.xlabel('Days');
plt.subplot(1, 2, 2)
tm = t[1:]-0.5
plt.plot(tm, [testRate(tt) for tt in tm], '-', color="darkgreen", label='daily total tests', lw=3)
plt.plot(tm, np.diff(NQ), '-', color="#348ABD", label='daily positive tests', lw=3)
plt.plot(tm, np.diff(NQ_det), '--', color="#348ABD", label='daily positive tests (deterministic)', lw=2)
plt.plot(tm, np.diff(I+R+IQ+RQ), '-', color="#A60628", label='true new cases', lw=3)
plt.legend(fontsize=18); plt.grid()
plt.autoscale(enable=True, axis='x', tight=True)
plt.ylabel('Compartment value')
plt.xlabel('Days');
###Output
_____no_output_____
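###Markdown
The short sketch below is an addition (not part of the original notebook); it just spells out how the positive-test rates $\tau_X$ defined above follow from the total testing rate and the entries of the `parameters` dictionary.
###Code
# Illustration only: compute (tau_S, tau_I, tau_R) for scalar class sizes S, I, R
def positive_test_rates(tau_tot, S, I, R, pars=parameters):
    pi = np.array([pars['pi_RS'], pars['pi_I'], pars['pi_RS']])                 # test frequencies per class
    kappa = np.array([pars['p_falsepos'], pars['p_truepos'], pars['p_falsepos']])  # P(positive test | class)
    norm = pi @ np.array([S, I, R])                                             # normalisation: sum_X pi_X X
    return tau_tot * pi * kappa / norm
positive_test_rates(testRate(50), S=4.5e4, I=4e3, R=1e3)
###Output
_____no_output_____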
###Markdown
In this simple model, testing and quarantining have helped to eradicate the disease. It is interesting to evaluate how many infections have been confirmed and how many have remained unconfirmed:
###Code
print("Confirmed cases:", int(RQ[-1]))
print("Confirmed cases (incl. false positives):", int(NQ[-1]))
print("Total cases:", int(R[-1]+RQ[-1]))
# load the data and rescale to intensive variables
Tf_inference = 30 # truncate to only getting the first few datapoints
Nf_inference = Tf_inference+1
x = (data_array[:Nf_inference]).astype('float')/N
inference_parameters = parameters.copy()
# a filter that sums over all the diagnosed people for each age group
fltr = np.kron([0, 0, 0, 1, 1, 1],np.identity(M))
print(fltr)
# Compare the deterministic trajectory and the stochastic trajectory with the same
# initial conditions and parameters
obs=np.einsum('ij,kj->ki', fltr, x)
x0=x[0]
# initialise the estimator
estimator = pyross.inference.SppQ(model_spec, inference_parameters, testRate, M, fi, Omega=N, lyapunov_method='euler')
# compute -log_p for the original (correct) parameters
logp = estimator.minus_logp_red(inference_parameters, x0, obs, fltr, Tf_inference, contactMatrix)
print(logp)
x0_old = np.array([
S0[0], S0[1], # S
I0[0], I0[1], # I
0, 0, # SQ
0, 0, # IQ
0, 0 # NiQ
])/N
# a filter that sums over all the diagnosed people for each age group
fltr_old = np.kron([0, 0, 0, 0, 1],np.identity(M))
print(fltr_old)
# initialise the estimator
estimator_old = pyross.inference.SppQ_old(model_spec, inference_parameters, testRate, M, fi, Omega=N, lyapunov_method='euler')
# compute -log_p for the original (correct) parameters
logp = estimator_old.minus_logp_red(inference_parameters, x0_old, obs, fltr_old, Tf_inference, contactMatrix)
print(logp)
# make parameter guesses and set up bounds for each parameter
eps=1e-4
param_priors = {
'beta':{
'mean': 0.015,
'std': 0.015,
'bounds': (eps, 0.1)
},
'gamma':{
'mean': 0.08,
'std': 0.1,
'bounds': (eps, 1)
}
}
# set up filter for initial conditions because they are constraint by the observed
# note that this filter is different from the bulk of the trajectory,
# because we know the initial value 0 holds for R all quarantined age groups
obs0 = np.zeros(M*4)
obs0[:M] = fi
fltr0 = np.kron(([[1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 1]
]), np.identity(M))
init_fltr = np.repeat([True, True, False, False, False, False], M)
full_obs = np.array([obs0, *obs[1:]])
full_fltrs = np.array([fltr0, *([fltr]*(Nf_inference-1))])
I0_g = (I0)/N
I_std = I0_g
bounds_for_I = np.tile([0.1/N, 100/N], M).reshape(M, 2)
S0_g = (S0)/N
S_std = I_std*3
bounds_for_S = np.array([(1/N, f) for f in fi])
init_priors = {
'independent':{
'fltr':init_fltr,
'mean':[*S0_g, *I0_g],
'std': [*S_std, *I_std],
'bounds': [*bounds_for_S, *bounds_for_I]
}
}
# optimisation parameters
ftol = 1e-5 # the relative tol in (-logp)
res = estimator.latent_infer_parameters(full_obs, full_fltrs, Tf_inference, contactMatrix,
param_priors, init_priors,
global_max_iter=30, global_atol=10,
verbose=True, ftol=ftol)
print("True parameters:")
print(inference_parameters)
print("\nInferred parameters:")
best_estimates = res['map_params_dict']
print(best_estimates)
print('\n True initial conditions: ')
print((x0*N).astype('int'))
map_x0 = res['map_x0']
print('\n Inferred initial conditions: ')
print((map_x0*N).astype('int'))
# compute -log_p for the original (correct) parameters
logp = estimator.minus_logp_red(inference_parameters, map_x0, obs, fltr, Tf_inference, contactMatrix)
print(logp)
# plot the guessed trajectory and the true trajectory
estimator.set_params(best_estimates)
estimator.set_det_model(best_estimates)
x_det = estimator.integrate(map_x0, 0, Tf, Nf)
fig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k')
plt.rcParams.update({'font.size': 22})
plt.plot(S/N, label='True S', ls='--', c='C1')
plt.plot(np.sum(x_det[:, 0*M:1*M],axis=1), label='Inferred S', c='C1')
plt.plot(I/N, label='True I', ls='--', c='C2')
plt.plot(np.sum(x_det[:, 1*M:2*M],axis=1), label='Inferred I', c='C2')
plt.plot(NQ/N, label='True Q ($S^Q+I^Q+R^Q$)', ls='--', c='C3')
plt.plot(np.sum(x_det[:, 3*M:6*M],axis=1), label='Inferred Q', c='C3')
plt.axvspan(0, Tf_inference,
label='Used for inference',
alpha=0.3, color='dodgerblue')
plt.xlim([0, Tf])
plt.legend(fontsize=18); plt.grid()
plt.autoscale(enable=True, axis='x', tight=True)
plt.xlabel("time [days]")
plt.show()
###Output
_____no_output_____ |
jump2digital-on/xn-pruebas.ipynb | ###Markdown
Number of new users per month. Total money requested each month. Average money requested per user each month. Percentage of requests that are accepted each month. Number of monthly requests that are accepted but NOT repaid. Number of monthly requests that are accepted and ARE repaid. Average time to repay a loan (MM:DD:HH). Amount of money lent each month. Predict the amount of money that will be requested in October.
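###Markdown
A possible starting point for these metrics is sketched below (an addition, not part of the original notebook); it assumes `df` is already loaded with the columns shown by `df.info()` in the next cell.
###Code
# Sketch only: monthly aggregations for the tasks listed above
import pandas as pd
import numpy as np
monthly = pd.Grouper(key='date', freq='M')
new_users_per_month = (df.sort_values('date')
                         .drop_duplicates('user_id')
                         .groupby(monthly).user_id.count())
total_requested = df.groupby(monthly).amount.sum()
mean_per_user = total_requested / df.groupby(monthly).user_id.nunique()
accepted_share = df.assign(ok=(df.status == 'approved')).groupby(monthly).ok.mean()
# Naive October forecast: extrapolate a linear trend fitted to the monthly totals
slope, intercept = np.polyfit(np.arange(len(total_requested)), total_requested.values, 1)
october_forecast = slope * len(total_requested) + intercept
###Output
_____no_output_____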
###Code
df.info()
mask = (df.mb_date.isnull())
(df[~mask].mb_date - df[~mask].date).mean()
mask = (df.sa_date.isnull())
(df[~mask].sa_date - df[~mask].date).mean()
df.user_id.value_counts(dropna=False)
mask = df.status == 'approved'
df[mask].groupby(by=pd.Grouper(key='date', freq='M'), dropna=False).sum()
df.shape
grupomes = pd.Grouper(key='date', freq='M')  # monthly grouper (not defined elsewhere in the notebook as given)
data = df.groupby(grupomes).amount.sum().reset_index()
sns.lineplot(data=data, x='date', y='amount')
mask = df.user_id.isnull()
df[mask].id.sum()
df.groupby(grupomes,dropna=False).user_id.count()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16105 entries, 0 to 16104
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 16105 non-null int64
1 amount 16105 non-null float64
2 status 16105 non-null object
3 created_at 16105 non-null object
4 user_id 14218 non-null float64
5 money_back_date 9498 non-null object
6 transfer_type 16105 non-null object
7 send_at 8776 non-null object
8 date 16105 non-null datetime64[ns, UTC]
9 mb_date 9498 non-null datetime64[ns, UTC]
dtypes: datetime64[ns, UTC](2), float64(2), int64(1), object(5)
memory usage: 1.2+ MB
###Markdown
Amount of money lent each month.
###Code
df.groupby(grupomes).amount.sum()
df.status == 'rejected'
###Output
_____no_output_____ |
module2_Python/day2/ucm_uso_numpy.ipynb | ###Markdown
Application of NumPy----------------------- Example 1The data contained in the file [datos.txt](./datos/datos.txt) describes the populations of three types of snakes in southern Africa over 20 years. Each column corresponds to a species and each row to a year.
###Code
import numpy as np
data = np.loadtxt('./datos/datos.txt') # carga de los datos
data
###Output
_____no_output_____
###Markdown
* Calculation of the mean population of each species over time.
###Code
# Solution:
np.mean(data[:,1:], axis = 0) # drop the first column (the years)
###Output
_____no_output_____
###Markdown
* Calculation of the sample standard deviation.
###Code
# Solution:
np.std(data[:,1:])
###Output
_____no_output_____
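###Markdown
A small follow-up (not part of the original exercise): `np.std` above returns the population standard deviation of the flattened array; for a per-species sample standard deviation you can pass `axis=0` and `ddof=1`.
###Code
# Per-species sample standard deviation (ddof=1 applies Bessel's correction)
np.std(data[:,1:], axis=0, ddof=1)
###Output
_____no_output_____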
###Markdown
* Which species had the largest population each year?
###Code
# Solution:
max_per_year = np.array([np.argmax(data[:,1:], axis =1), np.max(data[:,1:], axis = 1)]).T
max_per_year
max_year_specie = max_per_year[np.argmax(max_per_year[:,-1])]
print('The species with the largest population in a single year is number', max_year_specie[0], 'with population', max_year_specie[1])
print('The number of times species 0 has the largest population is', (max_per_year[:,0] == 0).sum())
###Output
The number of times species 0 has the largest population is 5
|
exercises/recurrent-neural-networks/time-series/Simple_RNN.ipynb | ###Markdown
Simple RNNIn this notebook, we're going to train a simple RNN to do **time-series prediction**. Given some set of input data, it should be able to generate a prediction for the next time step!> * First, we'll create our data* Then, define an RNN in PyTorch* Finally, we'll train our network and see how it performs Import resources and create data
###Code
import torch
from torch import nn
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(8,5))
# how many time steps/data pts are in one batch of data
seq_length = 20
# generate evenly spaced data pts
time_steps = np.linspace(0, np.pi, seq_length + 1)
data = np.sin(time_steps)
data.resize((seq_length + 1, 1)) # size becomes (seq_length+1, 1), adds an input_size dimension
x = data[:-1] # all but the last piece of data
y = data[1:] # all but the first
# display the data
plt.plot(time_steps[1:], x, 'r.', label='input, x') # x
plt.plot(time_steps[1:], y, 'b.', label='target, y') # y
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____
###Markdown
--- Define the RNNNext, we define an RNN in PyTorch. We'll use `nn.RNN` to create an RNN layer, then we'll add a last, fully-connected layer to get the output size that we want. An RNN takes in a number of parameters:* **input_size** - the size of the input* **hidden_dim** - the number of features in the RNN output and in the hidden state* **n_layers** - the number of layers that make up the RNN, typically 1-3; greater than 1 means that you'll create a stacked RNN* **batch_first** - whether or not the input/output of the RNN will have the batch_size as the first dimension (batch_size, seq_length, hidden_dim)Take a look at the [RNN documentation](https://pytorch.org/docs/stable/nn.htmlrnn) to read more about recurrent layers.
###Code
class RNN(nn.Module):
def __init__(self, input_size, output_size, hidden_dim, n_layers):
super(RNN, self).__init__()
self.hidden_dim=hidden_dim
# define an RNN with specified parameters
# batch_first means that the first dim of the input and output will be the batch_size
self.rnn = nn.RNN(input_size, hidden_dim, n_layers, batch_first=True)
# last, fully-connected layer
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, x, hidden):
# x (batch_size, seq_length, input_size)
# hidden (n_layers, batch_size, hidden_dim)
# r_out (batch_size, time_step, hidden_size)
batch_size = x.size(0)
# get RNN outputs
r_out, hidden = self.rnn(x, hidden)
# shape output to be (batch_size*seq_length, hidden_dim)
r_out = r_out.view(-1, self.hidden_dim)
# get final output
output = self.fc(r_out)
return output, hidden
###Output
_____no_output_____
###Markdown
Check the input and output dimensionsAs a check that your model is working as expected, test out how it responds to input data.
###Code
# test that dimensions are as expected
test_rnn = RNN(input_size=1, output_size=1, hidden_dim=10, n_layers=2)
# generate evenly spaced, test data pts
time_steps = np.linspace(0, np.pi, seq_length)
data = np.sin(time_steps)
data.resize((seq_length, 1))
test_input = torch.Tensor(data).unsqueeze(0) # give it a batch_size of 1 as first dimension
print('Input size: ', test_input.size())
# test out rnn sizes
test_out, test_h = test_rnn(test_input, None)
print('Output size: ', test_out.size())
print('Hidden state size: ', test_h.size())
###Output
Input size: torch.Size([1, 20, 1])
Output size: torch.Size([20, 1])
Hidden state size: torch.Size([2, 1, 10])
###Markdown
--- Training the RNNNext, we'll instantiate an RNN with some specified hyperparameters. Then train it over a series of steps, and see how it performs.
###Code
# decide on hyperparameters
input_size=1
output_size=1
hidden_dim=32
n_layers=1
# instantiate an RNN
rnn = RNN(input_size, output_size, hidden_dim, n_layers)
print(rnn)
###Output
RNN(
(rnn): RNN(1, 32, batch_first=True)
(fc): Linear(in_features=32, out_features=1, bias=True)
)
###Markdown
Loss and OptimizationThis is a regression problem: can we train an RNN to accurately predict the next data point, given a current data point?>* The data points are coordinate values, so to compare a predicted and ground_truth point, we'll use a regression loss: the mean squared error.* It's typical to use an Adam optimizer for recurrent models.
###Code
# MSE loss and Adam optimizer with a learning rate of 0.01
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(rnn.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
Defining the training functionThis function takes in an rnn, a number of steps to train for, and returns a trained rnn. This function is also responsible for displaying the loss and the predictions, every so often. Hidden StatePay close attention to the hidden state, here:* Before looping over a batch of training data, the hidden state is initialized* After a new hidden state is generated by the rnn, we get the latest hidden state, and use that as input to the rnn for the following steps
###Code
# train the RNN
def train(rnn, n_steps, print_every):
# initialize the hidden state
hidden = None
for batch_i, step in enumerate(range(n_steps)):
# defining the training data
time_steps = np.linspace(step * np.pi, (step+1)*np.pi, seq_length + 1)
data = np.sin(time_steps)
data.resize((seq_length + 1, 1)) # input_size=1
x = data[:-1]
y = data[1:]
# convert data into Tensors
x_tensor = torch.Tensor(x).unsqueeze(0) # unsqueeze gives a 1, batch_size dimension
y_tensor = torch.Tensor(y)
# outputs from the rnn
prediction, hidden = rnn(x_tensor, hidden)
## Representing Memory ##
# make a new variable for hidden and detach the hidden state from its history
# this way, we don't backpropagate through the entire history
hidden = hidden.data
# calculate the loss
loss = criterion(prediction, y_tensor)
# zero gradients
optimizer.zero_grad()
# perform backprop and update weights
loss.backward()
optimizer.step()
# display loss and predictions
if batch_i%print_every == 0:
print('Loss: ', loss.item())
plt.plot(time_steps[1:], x, 'r.') # input
plt.plot(time_steps[1:], prediction.data.numpy().flatten(), 'b.') # predictions
plt.show()
return rnn
# train the rnn and monitor results
n_steps = 75
print_every = 15
trained_rnn = train(rnn, n_steps, print_every)
###Output
Loss: 0.34128743410110474
###Markdown
Time-Series PredictionTime-series prediction can be applied to many tasks. Think about weather forecasting or predicting the ebb and flow of stock market prices. You can even try to generate predictions much further in the future than just one time step!
###Code
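# One possible extension (an addition, not part of the original notebook):
# a multi-step forecast that feeds each prediction back in as the next input.
# Assumes `trained_rnn` and `seq_length` from the cells above.
input_seq = torch.Tensor(np.sin(np.linspace(0, np.pi, seq_length)).reshape(1, seq_length, 1))
predictions = []
for i in range(30):
    out, _ = trained_rnn(input_seq, None)
    next_val = out[-1].detach()  # prediction for the next time step
    predictions.append(next_val.item())
    # slide the window: drop the oldest point, append the new prediction
    input_seq = torch.cat([input_seq[:, 1:, :], next_val.view(1, 1, 1)], dim=1)
plt.plot(predictions, 'g.')
plt.title('Predictions fed back as inputs')
plt.show()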
###Output
_____no_output_____ |
examples/distributions_conditionalupdate.ipynb | ###Markdown
Conditional Distributions Update authors:Jacob Schreiber [[email protected]]Nicholas Farn [[email protected]] This example shows the implementation of the classic Monty Hall problem.
###Code
from pomegranate import *
import numpy as np
###Output
_____no_output_____
###Markdown
Let's create the distributions for the guest's choice and the prize's location. They are both discrete distributions and are independent of one another.
###Code
guest = DiscreteDistribution( { 'A': 1./3, 'B': 1./3, 'C': 1./3 } )
prize = DiscreteDistribution( { 'A': 1./3, 'B': 1./3, 'C': 1./3 } )
###Output
_____no_output_____
###Markdown
Now we'll create a conditional probability table for the Monty Hall problem. The result of the Monty Hall problem depends on both the guest and the prize.
###Code
monty = ConditionalProbabilityTable(
[[ 'A', 'A', 'A', 0.0 ],
[ 'A', 'A', 'B', 0.5 ],
[ 'A', 'A', 'C', 0.5 ],
[ 'A', 'B', 'A', 0.0 ],
[ 'A', 'B', 'B', 0.0 ],
[ 'A', 'B', 'C', 1.0 ],
[ 'A', 'C', 'A', 0.0 ],
[ 'A', 'C', 'B', 1.0 ],
[ 'A', 'C', 'C', 0.0 ],
[ 'B', 'A', 'A', 0.0 ],
[ 'B', 'A', 'B', 0.0 ],
[ 'B', 'A', 'C', 1.0 ],
[ 'B', 'B', 'A', 0.5 ],
[ 'B', 'B', 'B', 0.0 ],
[ 'B', 'B', 'C', 0.5 ],
[ 'B', 'C', 'A', 1.0 ],
[ 'B', 'C', 'B', 0.0 ],
[ 'B', 'C', 'C', 0.0 ],
[ 'C', 'A', 'A', 0.0 ],
[ 'C', 'A', 'B', 1.0 ],
[ 'C', 'A', 'C', 0.0 ],
[ 'C', 'B', 'A', 1.0 ],
[ 'C', 'B', 'B', 0.0 ],
[ 'C', 'B', 'C', 0.0 ],
[ 'C', 'C', 'A', 0.5 ],
[ 'C', 'C', 'B', 0.5 ],
[ 'C', 'C', 'C', 0.0 ]], [guest, prize] )
###Output
_____no_output_____
###Markdown
Let's create some sample data to train our model.
###Code
data = [[ 'A', 'A', 'C' ],
[ 'A', 'A', 'C' ],
[ 'A', 'A', 'B' ],
[ 'A', 'A', 'A' ],
[ 'A', 'A', 'C' ],
[ 'B', 'B', 'B' ],
[ 'B', 'B', 'C' ],
[ 'C', 'C', 'A' ],
[ 'C', 'C', 'C' ],
[ 'C', 'C', 'C' ],
[ 'C', 'C', 'C' ],
[ 'C', 'B', 'A' ]]
###Output
_____no_output_____
###Markdown
Then train our model and see the results.
###Code
monty.fit( data, weights=[1, 1, 3, 3, 1, 1, 3, 7, 1, 1, 1, 1] )
print(monty)
###Output
C C C 0.3
C C B 0.0
C C A 0.7
C B C 0.0
C B B 0.0
C B A 1.0
C A C 0.333333333333
C A B 0.333333333333
C A A 0.333333333333
B C C 0.333333333333
B C B 0.333333333333
B C A 0.333333333333
B B C 0.75
B B B 0.25
B B A 0.0
B A C 0.333333333333
B A B 0.333333333333
B A A 0.333333333333
A C C 0.333333333333
A C B 0.333333333333
A C A 0.333333333333
A B C 0.333333333333
A B B 0.333333333333
A B A 0.333333333333
A A C 0.333333333333
A A B 0.333333333333
A A A 0.333333333333
###Markdown
Conditional Distributions Update authors:Jacob Schreiber [[email protected]]Nicholas Farn [[email protected]] This example shows the implementation of the classic Monty Hall problem.
###Code
from pomegranate import *
import numpy as np
###Output
_____no_output_____
###Markdown
Let's create the distributions for the guest's choice and the prize's location. They are both discrete distributions and are independent of one another.
###Code
guest = DiscreteDistribution( { 'A': 1./3, 'B': 1./3, 'C': 1./3 } )
prize = DiscreteDistribution( { 'A': 1./3, 'B': 1./3, 'C': 1./3 } )
###Output
_____no_output_____
###Markdown
Now we'll create a conditional probability table for the Monty Hall problem. The result of the Monty Hall problem depends on both the guest and the prize.
###Code
monty = ConditionalProbabilityTable(
[[ 'A', 'A', 'A', 0.0 ],
[ 'A', 'A', 'B', 0.5 ],
[ 'A', 'A', 'C', 0.5 ],
[ 'A', 'B', 'A', 0.0 ],
[ 'A', 'B', 'B', 0.0 ],
[ 'A', 'B', 'C', 1.0 ],
[ 'A', 'C', 'A', 0.0 ],
[ 'A', 'C', 'B', 1.0 ],
[ 'A', 'C', 'C', 0.0 ],
[ 'B', 'A', 'A', 0.0 ],
[ 'B', 'A', 'B', 0.0 ],
[ 'B', 'A', 'C', 1.0 ],
[ 'B', 'B', 'A', 0.5 ],
[ 'B', 'B', 'B', 0.0 ],
[ 'B', 'B', 'C', 0.5 ],
[ 'B', 'C', 'A', 1.0 ],
[ 'B', 'C', 'B', 0.0 ],
[ 'B', 'C', 'C', 0.0 ],
[ 'C', 'A', 'A', 0.0 ],
[ 'C', 'A', 'B', 1.0 ],
[ 'C', 'A', 'C', 0.0 ],
[ 'C', 'B', 'A', 1.0 ],
[ 'C', 'B', 'B', 0.0 ],
[ 'C', 'B', 'C', 0.0 ],
[ 'C', 'C', 'A', 0.5 ],
[ 'C', 'C', 'B', 0.5 ],
[ 'C', 'C', 'C', 0.0 ]], [guest, prize] )
###Output
_____no_output_____
###Markdown
Let's create some sample data to train our model.
###Code
data = [[ 'A', 'A', 'C' ],
[ 'A', 'A', 'C' ],
[ 'A', 'A', 'B' ],
[ 'A', 'A', 'A' ],
[ 'A', 'A', 'C' ],
[ 'B', 'B', 'B' ],
[ 'B', 'B', 'C' ],
[ 'C', 'C', 'A' ],
[ 'C', 'C', 'C' ],
[ 'C', 'C', 'C' ],
[ 'C', 'C', 'C' ],
[ 'C', 'B', 'A' ]]
###Output
_____no_output_____
###Markdown
Then train our model and see the results.
###Code
monty.fit( data, weights=[1, 1, 3, 3, 1, 1, 3, 7, 1, 1, 1, 1] )
print(monty)
###Output
A A A 0.3333333333333333
A A B 0.3333333333333333
A A C 0.3333333333333333
A B A 0.3333333333333333
A B B 0.3333333333333333
A B C 0.3333333333333333
A C A 0.3333333333333333
A C B 0.3333333333333333
A C C 0.3333333333333333
B A A 0.3333333333333333
B A B 0.3333333333333333
B A C 0.3333333333333333
B B A 0.0
B B B 0.25
B B C 0.75
B C A 0.3333333333333333
B C B 0.3333333333333333
B C C 0.3333333333333333
C A A 0.3333333333333333
C A B 0.3333333333333333
C A C 0.3333333333333333
C B A 1.0
C B B 0.0
C B C 0.0
C C A 0.7
C C B 0.0
C C C 0.3
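###Markdown
As an optional follow-up that is not part of the original example (and assumes the pre-1.0 pomegranate API imported above), the distributions can be wired into a Bayesian network and queried for the beliefs over the prize door given the guest's pick and the door Monty opened:
###Code
# Hypothetical follow-up sketch using State and BayesianNetwork from pomegranate
s_guest = State(guest, name="guest")
s_prize = State(prize, name="prize")
s_monty = State(monty, name="monty")
network = BayesianNetwork("Monty Hall")
network.add_states(s_guest, s_prize, s_monty)
network.add_edge(s_guest, s_monty)
network.add_edge(s_prize, s_monty)
network.bake()
# Posterior beliefs after the guest picks 'A' and Monty opens 'B'
print(network.predict_proba({'guest': 'A', 'monty': 'B'}))
###Output
_____no_output_____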
|
notebooks/demo_SVAGP-2d-gam.ipynb | ###Markdown
Additive Regression $$f_c \sim \cal{GP}(0,k_c),\; \forall c \in [1..C]$$$$y^{(n)}|f_1...f_C,x^{(n)} = y^{(n)}|\sum_c f_c(x^{(n)}_c)$$Functions $f_c$ are all functions of separate covariates $x_c$
###Code
import tensorflow as tf
import os
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import cm
tf.logging.set_verbosity(tf.logging.ERROR)
from tensorflow.contrib.opt import ScipyOptimizerInterface as soi
np.random.seed(10)
###Output
_____no_output_____
###Markdown
Simulating synthetic data
###Code
lik = 'Poisson'
#lik = 'Gaussian'
assert lik in ['Poisson','Gaussian']
#---------------------------------------------------
# Declaring additive GP model parameters
N =500
D = 4 # covariates dimension
R = 1 # number of trials
f_indices = [[0],[1,2],[3]]
C = len(f_indices) # number of latent functions
scale = 2.
fs = [lambda x:np.sin(x)**3*scale,
lambda x: (np.sin(x[:,0])*np.sin(x[:,1])).reshape(-1,1)*scale ,
lambda x:np.cos(x)*scale]
#---------------------------------------------------
# Simulating data
xmin,xmax=-3,3
X_np = np.random.uniform(xmin,xmax,(N,D))
F_np = np.hstack([fs[d](X_np[:,f_indices[d]]) for d in range(C)])
pred_np = np.sum(F_np,axis=1,keepdims=True)
if lik == 'Gaussian':
Y_np = pred_np + np.random.randn(N,R)*.5
elif lik=='Poisson':
link = np.exp
rate = np.tile(link(pred_np),[1,R])
Y_np = np.random.poisson(rate,size=(N,R))
colors_c = plt.cm.winter(np.linspace(0,1,C))
fig,ax = plt.subplots(1,C,figsize=(C*4,4))
for c in range(C):
i = f_indices[c]
if len(f_indices[c])==1:
o = np.argsort(X_np[:,f_indices[c]],0)
ax[c].plot(X_np[o,i],F_np[o,c],'-',color=colors_c[c])
ax[c].set_xlabel('$x_%d$'%i[0],fontsize=20)
ax[c].set_ylabel('$f_%d(x_%d)$'%(i[0],i[0]),fontsize=20)
elif len(f_indices[c])==2:
ax[c].scatter(X_np[:,i[0]],
X_np[:,i[1]],
c=F_np[:,c],linewidth=0)
ax[c].set_xlabel('$x_%d$'%i[0],fontsize=20)
ax[c].set_ylabel('$x_%d$'%i[1],fontsize=20)
ax[c].set_title('$f(x_%d,x_%d)$'%(i[0],i[1]),fontsize=20)
plt.suptitle('True underlying functions',y=1.05,fontsize=20)
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Constructing tensorflow model
###Code
import sys
sys.path.append('../SVAGP')
from kernels import RBF
from likelihoods import Gaussian, Poisson, Gaussian_with_link
from settings import np_float_type,int_type
from model import SVAGP
#---------------------------------------------------
# Constructing tensorflow model
X = tf.placeholder(tf.float32,[N,D])
Y = tf.placeholder(tf.float32,[N,R])
ks,Zs = [],[]
ks =[]
with tf.variable_scope("kernels") as scope:
for c in range(C):
with tf.variable_scope("kernel%d"%c) as scope:
input_dim = len(f_indices[c])
ks.append( RBF(input_dim,lengthscales=.5*np.ones(input_dim), variance=1.))
with tf.variable_scope("likelihood") as scope:
if lik=='Gaussian':
likelihood = Gaussian(variance=1)
elif lik == 'Poisson':
likelihood = Poisson()
with tf.variable_scope("ind_points") as scope:
for c in range(C):
with tf.variable_scope("ind_points%d"%c) as scope:
input_dim = len(f_indices[c])
Z_ = np.random.uniform(xmin,xmax,[20,input_dim]).astype(np_float_type)
Zs.append( tf.Variable(Z_,tf.float32,name='Z') )
with tf.variable_scope("model") as scope:
m= SVAGP(X,Y,ks,likelihood,Zs,q_diag=True,f_indices=f_indices)
###Output
_____no_output_____
###Markdown
Running inference and learning
###Code
sess = tf.Session()
sess.run(tf.global_variables_initializer()) # initialize all variables
# declare loss
loss = -m.build_likelihood()
# separate variables
vars_e, vars_m, vars_h, vars_z= [], [], [], []
vars_e += tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='model/inference')
if lik=='Gaussian':
vars_m += tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='likelihood')
vars_z += tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='ind_points')
vars_h += tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='kernels')
# declare optimizers
opt_e = soi(loss, var_list=vars_e, method='L-BFGS-B', options={'ftol': 1e-4})
opt_m = soi(loss, var_list=vars_m, method='L-BFGS-B', options={'ftol': 1e-4})
opt_z = soi(loss, var_list=vars_z, method='L-BFGS-B', options={'ftol': 1e-4})
opt_h = soi(loss, var_list=vars_h, method='L-BFGS-B', options={'ftol': 1e-2})
init = tf.global_variables_initializer()
sess.run(init) # initialize all variables
feed_dic = {Y:Y_np, X:X_np}
#---------------------------------------------------
print('Optimized variables:')
for var in vars_e+vars_z+vars_h:
    print(var.name) # print the name of each variable being optimized
nit = 30
loss_array = np.zeros((nit,))
# declare which optimization to perform
OPT = ['E','Z','H']
if lik=='Gaussian':
OPT.append('M')
# Optimization is performed using L-BFGS-B, iterating over different subsets of variable
# - E: inference (as in classical EM)
# - Z: update of inducing point locations
# - H: kernel hyperparameter optimization
print('Starting Optimization')
opt_e.minimize(sess, feed_dict=feed_dic)
if 'E' in OPT:
opt_e.minimize(sess, feed_dict=feed_dic)
if 'H' in OPT:
opt_h.minimize(sess, feed_dict=feed_dic)
for it in range( nit):
if 'E' in OPT:
opt_e.minimize(sess, feed_dict=feed_dic)
if 'M' in OPT:
opt_m.minimize(sess, feed_dict=feed_dic)
if 'Z' in OPT:
opt_z.minimize(sess, feed_dict=feed_dic)
if 'H' in OPT:
        opt_h.minimize(sess, feed_dict=feed_dic)
loss_array[it]= float(sess.run(loss, feed_dic))
Fs_mean,Fs_var = sess.run(m.build_predict_fs(X), feed_dic)
pred_mean,pred_var = sess.run(m.build_predict_additive_predictor(X), feed_dic)
Zs = sess.run(m.Zs, feed_dic)
sess.close()
fig,axarr = plt.subplots(1,2,figsize=(8,4))
ax=axarr[0]
ax.plot(loss_array[:it], linewidth=3, color='blue')
ax.set_xlabel('iterations',fontsize=20)
ax.set_ylabel('Variational Objective',fontsize=20)
ax=axarr[1]
ax.errorbar(pred_mean,pred_np,yerr=np.sqrt(pred_var),
elinewidth = 1, fmt='.', color='blue', alpha=.1)
ax.plot(pred_mean,pred_np,'.',color='blue')
ax.plot([pred_mean.min(),pred_mean.max()],
[pred_mean.min(),pred_mean.max()],
'--',linewidth=2,color='grey')
ax.set_xlabel('True predictor',fontsize=20)
ax.set_ylabel('Predicted predictor',fontsize=20)
fig.tight_layout()
plt.show()
plt.close()
fig,ax = plt.subplots(1,C,figsize=(C*5,5))
for c in range(C):
i = f_indices[c]
if len(i)==1:
o = np.argsort(X_np[:,i],0)
f,s = Fs_mean[c,:,0],np.sqrt(Fs_var[c,:,0])
ax[c].vlines(Zs[c],ymin=f.min(),ymax=f.max(),alpha=.05,color=colors_c[c])
ax[c].plot(X_np[o,i],f[o],color=colors_c[c])
ax[c].fill_between(X_np[o,i].flatten(),
(f-s)[o].flatten(),
y2=(f+s)[o].flatten(),
alpha=.1,facecolor=colors_c[c])
ax[c].plot(X_np[o,i],F_np[o,c],'--',color=colors_c[c])
ax[c].set_xlabel('$x_%d$'%i[0],fontsize=20)
ax[c].set_ylabel('$f_%d(x_%d)$'%(i[0],i[0]),fontsize=20)
elif len(f_indices[c])==2:
ax[c].scatter(X_np[:,i[0]],
X_np[:,i[1]],
c=Fs_mean[c,:,0],linewidth=0)
ax[c].scatter(Zs[c][:,0],Zs[c][:,1],
c='r', marker=(5, 1))
ax[c].set_xlabel('$x_%d$'%i[0],fontsize=20)
ax[c].set_ylabel('$x_%d$'%i[1],fontsize=20)
ax[c].set_title('$f(x_%d,x_%d)$'%(i[0],i[1]),fontsize=20)
plt.suptitle('Inferred underlying functions',y=1.05,fontsize=20)
fig.tight_layout()
plt.show()
###Output
_____no_output_____ |
docs/tools/engine/tensor_plus_tensor.ipynb | ###Markdown
JIT Engine: Tensor + TensorThis example will go over how to compile MLIR code to a function callable from Python.The example MLIR code we’ll use here performs element-wise tensor addition.Let’s first import some necessary modules and generate an instance of our JIT engine.
###Code
import mlir_graphblas
import numpy as np
engine = mlir_graphblas.MlirJitEngine()
###Output
_____no_output_____
###Markdown
We'll use the same set of passes to optimize and compile all of our examples below.
###Code
passes = [
"--linalg-bufferize",
"--func-bufferize",
"--tensor-bufferize",
"--tensor-constant-bufferize",
"--convert-linalg-to-loops",
"--finalizing-bufferize",
"--convert-scf-to-std",
"--convert-std-to-llvm",
]
###Output
_____no_output_____
###Markdown
Fixed-Size Tensor AdditionHere’s some MLIR code to add two 32-bit floating point tensors with the shape 2x3.
###Code
mlir_text = """
#trait_add = {
indexing_maps = [
affine_map<(i, j) -> (i, j)>,
affine_map<(i, j) -> (i, j)>,
affine_map<(i, j) -> (i, j)>
],
iterator_types = ["parallel", "parallel"]
}
func @matrix_add_f32(%arga: tensor<2x3xf32>, %argb: tensor<2x3xf32>) -> tensor<2x3xf32> {
%answer = linalg.generic #trait_add
ins(%arga, %argb: tensor<2x3xf32>, tensor<2x3xf32>)
outs(%arga: tensor<2x3xf32>) {
^bb(%a: f32, %b: f32, %s: f32):
%sum = addf %a, %b : f32
linalg.yield %sum : f32
} -> tensor<2x3xf32>
return %answer : tensor<2x3xf32>
}
"""
###Output
_____no_output_____
###Markdown
Let's compile our MLIR code.
###Code
engine.add(mlir_text, passes)
###Output
_____no_output_____
###Markdown
Let's try out our compiled function.
###Code
# grab our callable
matrix_add_f32 = engine.matrix_add_f32
# generate inputs
a = np.arange(6, dtype=np.float32).reshape([2, 3])
b = np.full([2, 3], 100, dtype=np.float32)
# generate output
result = matrix_add_f32(a, b)
result
###Output
_____no_output_____
###Markdown
Let's verify that our function works as expected.
###Code
np.all(result == np.add(a, b))
###Output
_____no_output_____
###Markdown
Arbitrary-Size Tensor AdditionThe above example created a function to add two matrices of size 2x3. This function won't work if we want to add two matrices of size 4x5 or any other size.
###Code
a = np.arange(20, dtype=np.float32).reshape([4, 5])
b = np.full([4, 5], 100, dtype=np.float32)
matrix_add_f32(a, b)
###Output
_____no_output_____
###Markdown
While it's nice that the JIT engine is able to detect that there's a size mismatch, it'd be nicer to have a function that can add two tensors of arbitrary size. We'll now show how to create such a function for matrices of 32-bit integers.
###Code
mlir_text = """
#trait_add = {
indexing_maps = [
affine_map<(i, j) -> (i, j)>,
affine_map<(i, j) -> (i, j)>,
affine_map<(i, j) -> (i, j)>
],
iterator_types = ["parallel", "parallel"]
}
func @matrix_add_i32(%arga: tensor<?x?xi32>, %argb: tensor<?x?xi32>) -> tensor<?x?xi32> {
// Find the max dimensions of both args
%c0 = constant 0 : index
%c1 = constant 1 : index
%arga_dim0 = dim %arga, %c0 : tensor<?x?xi32>
%arga_dim1 = dim %arga, %c1 : tensor<?x?xi32>
%argb_dim0 = dim %argb, %c0 : tensor<?x?xi32>
%argb_dim1 = dim %argb, %c1 : tensor<?x?xi32>
%dim0_gt = cmpi "ugt", %arga_dim0, %argb_dim0 : index
%dim1_gt = cmpi "ugt", %arga_dim1, %argb_dim1 : index
%output_dim0 = select %dim0_gt, %arga_dim0, %argb_dim0 : index
%output_dim1 = select %dim1_gt, %arga_dim1, %argb_dim1 : index
%output_memref = alloca(%output_dim0, %output_dim1) : memref<?x?xi32>
%output_tensor = tensor_load %output_memref : memref<?x?xi32>
// Perform addition
%answer = linalg.generic #trait_add
ins(%arga, %argb: tensor<?x?xi32>, tensor<?x?xi32>)
outs(%output_tensor: tensor<?x?xi32>) {
^bb(%a: i32, %b: i32, %s: i32):
%sum = addi %a, %b : i32
linalg.yield %sum : i32
} -> tensor<?x?xi32>
return %answer : tensor<?x?xi32>
}
"""
###Output
_____no_output_____
###Markdown
The compilation of this MLIR code will be the same as our first example. The main difference is in how we wrote our MLIR code (notice the use of "?x?" when denoting the shapes of tensors).
###Code
# compile
engine.add(mlir_text, passes)
matrix_add_i32 = engine.matrix_add_i32
# generate inputs
a = np.arange(20, dtype=np.int32).reshape([4, 5])
b = np.full([4, 5], 100, dtype=np.int32)
# generate output
result = matrix_add_i32(a, b)
result
assert np.all(result == np.add(a, b))
###Output
_____no_output_____
###Markdown
Note that we get some level of safety regarding the tensor types as we get an exception if we pass in tensors with the wrong dtype.
###Code
matrix_add_i32(a, b.astype(np.int64))
###Output
_____no_output_____
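###Markdown
If dtype mismatches are a concern at call sites, one option is to normalize dtypes on the Python side before calling the compiled function. The helper below is a minimal sketch of our own (not part of the mlir_graphblas API); it simply casts both inputs to int32, which is what the compiled kernel above expects.
###Code
# Hedged sketch: cast inputs to the dtype the compiled kernel expects before calling it.
def add_i32_casting(x, y):
    x = np.ascontiguousarray(x, dtype=np.int32)  # copies only when a cast is needed
    y = np.ascontiguousarray(y, dtype=np.int32)
    return matrix_add_i32(x, y)
# The int64 input that raised an exception above is now cast back to int32 first.
add_i32_casting(a, b.astype(np.int64))
###Output
_____no_output_____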
###Markdown
Note that in the MLIR code, each of our output tensor's dimensions is the max of each dimension of our inputs. A consequence of this is that our function doesn't enforce that our inputs are the same shape.
###Code
# generate differently shaped inputs
a = np.arange(6, dtype=np.int32).reshape([2, 3])
b = np.full([4, 5], 100, dtype=np.int32)
# generate output
result = matrix_add_i32(a, b)
result.shape
result
###Output
_____no_output_____
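###Markdown
Since the compiled kernel itself does not enforce matching shapes, callers may want to validate shapes on the Python side. The wrapper below is a minimal sketch of our own convenience code, not an mlir_graphblas API.
###Code
# Hedged sketch: reject mismatched shapes before delegating to the compiled kernel.
def checked_matrix_add_i32(x, y):
    if x.shape != y.shape:
        raise ValueError("shape mismatch: {} vs {}".format(x.shape, y.shape))
    return matrix_add_i32(x, y)
# Same-shape inputs pass straight through to the compiled function.
x = np.arange(6, dtype=np.int32).reshape([2, 3])
y = np.ones([2, 3], dtype=np.int32)
assert np.array_equal(checked_matrix_add_i32(x, y), x + y)
###Output
_____no_output_____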
###Markdown
JIT Engine: Tensor + TensorThis example will go over how to compile MLIR code to a function callable from Python.The example MLIR code we’ll use here performs element-wise tensor addition.Let’s first import some necessary modules and generate an instance of our JIT engine.
###Code
import mlir_graphblas
import numpy as np
engine = mlir_graphblas.MlirJitEngine()
###Output
_____no_output_____
###Markdown
We'll use the same set of passes to optimize and compile all of our examples below.
###Code
passes = [
"--graphblas-structuralize",
"--graphblas-optimize",
"--graphblas-lower",
"--sparsification",
"--sparse-tensor-conversion",
"--linalg-bufferize",
"--func-bufferize",
"--tensor-constant-bufferize",
"--tensor-bufferize",
"--finalizing-bufferize",
"--convert-linalg-to-loops",
"--convert-scf-to-std",
"--convert-memref-to-llvm",
"--convert-math-to-llvm",
"--convert-openmp-to-llvm",
"--convert-arith-to-llvm",
"--convert-math-to-llvm",
"--convert-std-to-llvm",
"--reconcile-unrealized-casts"
]
###Output
_____no_output_____
###Markdown
Fixed-Size Tensor AdditionHere’s some MLIR code to add two 32-bit floating point tensors with the shape 2x3.
###Code
mlir_text = """
#trait_add = {
indexing_maps = [
affine_map<(i, j) -> (i, j)>,
affine_map<(i, j) -> (i, j)>,
affine_map<(i, j) -> (i, j)>
],
iterator_types = ["parallel", "parallel"]
}
func @matrix_add_f32(%arga: tensor<2x3xf32>, %argb: tensor<2x3xf32>) -> tensor<2x3xf32> {
%answer = linalg.generic #trait_add
ins(%arga, %argb: tensor<2x3xf32>, tensor<2x3xf32>)
outs(%arga: tensor<2x3xf32>) {
^bb(%a: f32, %b: f32, %s: f32):
%sum = arith.addf %a, %b : f32
linalg.yield %sum : f32
} -> tensor<2x3xf32>
return %answer : tensor<2x3xf32>
}
"""
###Output
_____no_output_____
###Markdown
Let's compile our MLIR code.
###Code
engine.add(mlir_text, passes)
###Output
_____no_output_____
###Markdown
Let's try out our compiled function.
###Code
# grab our callable
matrix_add_f32 = engine.matrix_add_f32
# generate inputs
a = np.arange(6, dtype=np.float32).reshape([2, 3])
b = np.full([2, 3], 100, dtype=np.float32)
# generate output
result = matrix_add_f32(a, b)
result
###Output
_____no_output_____
###Markdown
Let's verify that our function works as expected.
###Code
np.all(result == np.add(a, b))
###Output
_____no_output_____
###Markdown
Arbitrary-Size Tensor AdditionThe above example created a function to add two matrices of size 2x3. This function won't work if we want to add two matrices of size 4x5 or any other size.
###Code
a = np.arange(20, dtype=np.float32).reshape([4, 5])
b = np.full([4, 5], 100, dtype=np.float32)
matrix_add_f32(a, b)
###Output
_____no_output_____
###Markdown
While it's nice that the JIT engine is able to detect that there's a size mismatch, it'd be nicer to have a function that can add two tensors of arbitrary size. We'll now show how to create such a function for matrices of 32-bit integers.
###Code
mlir_text = """
#trait_add = {
indexing_maps = [
affine_map<(i, j) -> (i, j)>,
affine_map<(i, j) -> (i, j)>,
affine_map<(i, j) -> (i, j)>
],
iterator_types = ["parallel", "parallel"]
}
func @matrix_add_i32(%arga: tensor<?x?xi32>, %argb: tensor<?x?xi32>) -> tensor<?x?xi32> {
// Find the max dimensions of both args
%c0 = arith.constant 0 : index
%c1 = arith.constant 1 : index
%arga_dim0 = tensor.dim %arga, %c0 : tensor<?x?xi32>
%arga_dim1 = tensor.dim %arga, %c1 : tensor<?x?xi32>
%argb_dim0 = tensor.dim %argb, %c0 : tensor<?x?xi32>
%argb_dim1 = tensor.dim %argb, %c1 : tensor<?x?xi32>
%dim0_gt = arith.cmpi "ugt", %arga_dim0, %argb_dim0 : index
%dim1_gt = arith.cmpi "ugt", %arga_dim1, %argb_dim1 : index
%output_dim0 = std.select %dim0_gt, %arga_dim0, %argb_dim0 : index
%output_dim1 = std.select %dim1_gt, %arga_dim1, %argb_dim1 : index
%output_memref = memref.alloca(%output_dim0, %output_dim1) : memref<?x?xi32>
%output_tensor = memref.tensor_load %output_memref : memref<?x?xi32>
// Perform addition
%answer = linalg.generic #trait_add
ins(%arga, %argb: tensor<?x?xi32>, tensor<?x?xi32>)
outs(%output_tensor: tensor<?x?xi32>) {
^bb(%a: i32, %b: i32, %s: i32):
%sum = arith.addi %a, %b : i32
linalg.yield %sum : i32
} -> tensor<?x?xi32>
return %answer : tensor<?x?xi32>
}
"""
###Output
_____no_output_____
###Markdown
The compilation of this MLIR code will be the same as our first example. The main difference is in how we wrote our MLIR code (notice the use of "?x?" when denoting the shapes of tensors).
###Code
# compile
engine.add(mlir_text, passes)
matrix_add_i32 = engine.matrix_add_i32
# generate inputs
a = np.arange(20, dtype=np.int32).reshape([4, 5])
b = np.full([4, 5], 100, dtype=np.int32)
# generate output
result = matrix_add_i32(a, b)
result
assert np.all(result == np.add(a, b))
###Output
_____no_output_____
###Markdown
Note that we get some level of safety regarding the tensor types as we get an exception if we pass in tensors with the wrong dtype.
###Code
matrix_add_i32(a, b.astype(np.int64))
###Output
_____no_output_____
###Markdown
Note that in the MLIR code, each of our output tensor's dimensions is the max of each dimension of our inputs. A consequence of this is that our function doesn't enforce that our inputs are the same shape.
###Code
# generate differently shaped inputs
a = np.arange(6, dtype=np.int32).reshape([2, 3])
b = np.full([4, 5], 100, dtype=np.int32)
# generate output
result = matrix_add_i32(a, b)
result.shape
result
###Output
_____no_output_____
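###Markdown
Every example in this notebook repeats the same two steps: call engine.add(mlir_text, passes) and then look the compiled function up as an attribute on the engine. The helper below is a sketch of our own convenience wrapper around that pattern; it is not part of the mlir_graphblas API, and it assumes compiled functions remain reachable via attribute access as shown above.
###Code
# Hedged sketch: bundle the compile-then-fetch pattern used throughout this notebook.
def compile_function(engine, mlir_text, passes, name):
    engine.add(mlir_text, passes)   # compile the MLIR module with the given pass pipeline
    return getattr(engine, name)    # compiled functions are exposed as engine attributes
# Example usage (commented out to avoid recompiling the functions defined above):
# matrix_add_i32 = compile_function(engine, mlir_text, passes, "matrix_add_i32")
###Output
_____no_output_____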
###Markdown
JIT Engine: Tensor + TensorThis example will go over how to compile MLIR code to a function callable from Python.The example MLIR code we’ll use here performs element-wise tensor addition.Let’s first import some necessary modules and generate an instance of our JIT engine.
###Code
import mlir_graphblas
import numpy as np
engine = mlir_graphblas.MlirJitEngine()
###Output
_____no_output_____
###Markdown
We'll use the same set of passes to optimize and compile all of our examples below.
###Code
passes = [
"--linalg-bufferize",
"--func-bufferize",
"--tensor-bufferize",
"--tensor-constant-bufferize",
"--finalizing-bufferize",
"--convert-linalg-to-loops",
"--convert-scf-to-std",
"--convert-memref-to-llvm",
"--convert-std-to-llvm",
]
###Output
_____no_output_____
###Markdown
Fixed-Size Tensor AdditionHere’s some MLIR code to add two 32-bit floating point tensors with the shape 2x3.
###Code
mlir_text = """
#trait_add = {
indexing_maps = [
affine_map<(i, j) -> (i, j)>,
affine_map<(i, j) -> (i, j)>,
affine_map<(i, j) -> (i, j)>
],
iterator_types = ["parallel", "parallel"]
}
func @matrix_add_f32(%arga: tensor<2x3xf32>, %argb: tensor<2x3xf32>) -> tensor<2x3xf32> {
%answer = linalg.generic #trait_add
ins(%arga, %argb: tensor<2x3xf32>, tensor<2x3xf32>)
outs(%arga: tensor<2x3xf32>) {
^bb(%a: f32, %b: f32, %s: f32):
%sum = addf %a, %b : f32
linalg.yield %sum : f32
} -> tensor<2x3xf32>
return %answer : tensor<2x3xf32>
}
"""
###Output
_____no_output_____
###Markdown
Let's compile our MLIR code.
###Code
engine.add(mlir_text, passes)
###Output
_____no_output_____
###Markdown
Let's try out our compiled function.
###Code
# grab our callable
matrix_add_f32 = engine.matrix_add_f32
# generate inputs
a = np.arange(6, dtype=np.float32).reshape([2, 3])
b = np.full([2, 3], 100, dtype=np.float32)
# generate output
result = matrix_add_f32(a, b)
result
###Output
_____no_output_____
###Markdown
Let's verify that our function works as expected.
###Code
np.all(result == np.add(a, b))
###Output
_____no_output_____
###Markdown
Arbitrary-Size Tensor AdditionThe above example created a function to add two matrices of size 2x3. This function won't work if we want to add two matrices of size 4x5 or any other size.
###Code
a = np.arange(20, dtype=np.float32).reshape([4, 5])
b = np.full([4, 5], 100, dtype=np.float32)
matrix_add_f32(a, b)
###Output
_____no_output_____
###Markdown
While it's nice that the JIT engine is able to detect that there's a size mismatch, it'd be nicer to have a function that can add two tensors of arbitrary size. We'll now show how to create such a function for matrices of 32-bit integers.
###Code
mlir_text = """
#trait_add = {
indexing_maps = [
affine_map<(i, j) -> (i, j)>,
affine_map<(i, j) -> (i, j)>,
affine_map<(i, j) -> (i, j)>
],
iterator_types = ["parallel", "parallel"]
}
func @matrix_add_i32(%arga: tensor<?x?xi32>, %argb: tensor<?x?xi32>) -> tensor<?x?xi32> {
// Find the max dimensions of both args
%c0 = constant 0 : index
%c1 = constant 1 : index
%arga_dim0 = tensor.dim %arga, %c0 : tensor<?x?xi32>
%arga_dim1 = tensor.dim %arga, %c1 : tensor<?x?xi32>
%argb_dim0 = tensor.dim %argb, %c0 : tensor<?x?xi32>
%argb_dim1 = tensor.dim %argb, %c1 : tensor<?x?xi32>
%dim0_gt = cmpi "ugt", %arga_dim0, %argb_dim0 : index
%dim1_gt = cmpi "ugt", %arga_dim1, %argb_dim1 : index
%output_dim0 = select %dim0_gt, %arga_dim0, %argb_dim0 : index
%output_dim1 = select %dim1_gt, %arga_dim1, %argb_dim1 : index
%output_memref = memref.alloca(%output_dim0, %output_dim1) : memref<?x?xi32>
%output_tensor = memref.tensor_load %output_memref : memref<?x?xi32>
// Perform addition
%answer = linalg.generic #trait_add
ins(%arga, %argb: tensor<?x?xi32>, tensor<?x?xi32>)
outs(%output_tensor: tensor<?x?xi32>) {
^bb(%a: i32, %b: i32, %s: i32):
%sum = addi %a, %b : i32
linalg.yield %sum : i32
} -> tensor<?x?xi32>
return %answer : tensor<?x?xi32>
}
"""
###Output
_____no_output_____
###Markdown
The compilation of this MLIR code will be the same as our first example. The main difference is in how we wrote our MLIR code (notice the use of "?x?" when denoting the shapes of tensors).
###Code
# compile
engine.add(mlir_text, passes)
matrix_add_i32 = engine.matrix_add_i32
# generate inputs
a = np.arange(20, dtype=np.int32).reshape([4, 5])
b = np.full([4, 5], 100, dtype=np.int32)
# generate output
result = matrix_add_i32(a, b)
result
assert np.all(result == np.add(a, b))
###Output
_____no_output_____
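###Markdown
Because the kernel now accepts arbitrary (matching) sizes, we can extend the single assert above into a quick spot-check against NumPy on a few randomly sized inputs. This cell is just an illustrative sanity check we added.
###Code
# Hedged sketch: compare the compiled kernel with NumPy addition on several random shapes.
rng = np.random.default_rng(0)
for _ in range(5):
    rows, cols = rng.integers(1, 8, size=2)
    x = rng.integers(-100, 100, size=(rows, cols)).astype(np.int32)
    y = rng.integers(-100, 100, size=(rows, cols)).astype(np.int32)
    assert np.array_equal(matrix_add_i32(x, y), x + y)
print("all random checks passed")
###Output
_____no_output_____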
###Markdown
Note that we get some level of safety regarding the tensor types as we get an exception if we pass in tensors with the wrong dtype.
###Code
matrix_add_i32(a, b.astype(np.int64))
###Output
_____no_output_____
###Markdown
Note that in the MLIR code, each of our output tensor's dimensions is the max of each dimension of our inputs. A consequence of this is that our function doesn't enforce that our inputs are the same shape.
###Code
# generate differently shaped inputs
a = np.arange(6, dtype=np.int32).reshape([2, 3])
b = np.full([4, 5], 100, dtype=np.int32)
# generate output
result = matrix_add_i32(a, b)
result.shape
result
###Output
_____no_output_____ |
Science/SpecificAndLatentHeat/specific-and-latent-heat.ipynb | ###Markdown
![Callysto.ca Banner](https://github.com/callysto/curriculum-notebooks/blob/master/callysto-notebook-banner-top.jpg?raw=true) (Click **Cell** > **Run All** before proceeding.)
###Code
%matplotlib inline
#----------
#Import modules and packages
import ipywidgets as widgets
import random
import math
import matplotlib.pyplot as plt
from ipywidgets import Output, IntSlider, VBox, HBox, Layout
from IPython.display import clear_output, display, HTML, Javascript, SVG
#----------
#import ipywidgets as widgets
#import random
#This function produces a multiple choice form with four options
def multiple_choice(option_1, option_2, option_3, option_4):
option_list = [option_1, option_2, option_3, option_4]
answer = option_list[0]
letters = ["(A) ", "(B) ", "(C) ", "(D) "]
#Boldface letters at the beginning of each option
start_bold = "\033[1m"; end_bold = "\033[0;0m"
#Randomly shuffle the options
random.shuffle(option_list)
#Print the letters (A) to (D) in sequence with randomly chosen options
for i in range(4):
option_text = option_list.pop()
print(start_bold + letters[i] + end_bold + option_text)
#Store the correct answer
if option_text == answer:
letter_answer = letters[i]
button1 = widgets.Button(description="(A)"); button2 = widgets.Button(description="(B)")
button3 = widgets.Button(description="(C)"); button4 = widgets.Button(description="(D)")
button1.style.button_color = 'Whitesmoke'; button2.style.button_color = 'Whitesmoke'
button3.style.button_color = 'Whitesmoke'; button4.style.button_color = 'Whitesmoke'
container = widgets.HBox(children=[button1,button2,button3,button4])
display(container)
print(" ", end='\r')
def on_button1_clicked(b):
if "(A) " == letter_answer:
print("Correct! ", end='\r')
button1.style.button_color = 'Moccasin'; button2.style.button_color = 'Whitesmoke'
button3.style.button_color = 'Whitesmoke'; button4.style.button_color = 'Whitesmoke'
else:
print("Try again.", end='\r')
button1.style.button_color = 'Lightgray'; button2.style.button_color = 'Whitesmoke'
button3.style.button_color = 'Whitesmoke'; button4.style.button_color = 'Whitesmoke'
def on_button2_clicked(b):
if "(B) " == letter_answer:
print("Correct! ", end='\r')
button1.style.button_color = 'Whitesmoke'; button2.style.button_color = 'Moccasin'
button3.style.button_color = 'Whitesmoke'; button4.style.button_color = 'Whitesmoke'
else:
print("Try again.", end='\r')
button1.style.button_color = 'Whitesmoke'; button2.style.button_color = 'Lightgray'
button3.style.button_color = 'Whitesmoke'; button4.style.button_color = 'Whitesmoke'
def on_button3_clicked(b):
if "(C) " == letter_answer:
print("Correct! ", end='\r')
button1.style.button_color = 'Whitesmoke'; button2.style.button_color = 'Whitesmoke'
button3.style.button_color = 'Moccasin'; button4.style.button_color = 'Whitesmoke'
else:
print("Try again.", end='\r')
button1.style.button_color = 'Whitesmoke'; button2.style.button_color = 'Whitesmoke'
button3.style.button_color = 'Lightgray'; button4.style.button_color = 'Whitesmoke'
def on_button4_clicked(b):
if "(D) " == letter_answer:
print("Correct! ", end='\r')
button1.style.button_color = 'Whitesmoke'; button2.style.button_color = 'Whitesmoke'
button3.style.button_color = 'Whitesmoke'; button4.style.button_color = 'Moccasin'
else:
print("Try again.", end='\r')
button1.style.button_color = 'Whitesmoke'; button2.style.button_color = 'Whitesmoke'
button3.style.button_color = 'Whitesmoke'; button4.style.button_color = 'Lightgray'
button1.on_click(on_button1_clicked); button2.on_click(on_button2_clicked)
button3.on_click(on_button3_clicked); button4.on_click(on_button4_clicked)
###Output
_____no_output_____
###Markdown
Specific and Latent Heat Introduction**Heat** is defined as the *transfer of energy* from one object to another due to a difference in their relative temperatures. As heat flows from one object into another, the temperature of either one or both objects changes. Specific Heat CapacityThe amount of heat required to change the temperature of a given material is given by the following equation:$$Q = m C \Delta T$$where $Q$ represents heat in joules (J), $m$ represents mass in kilograms (kg), and $\Delta T$ represents the change in temperature in Celsius (°C) or kelvin (K). The parameter $C$ is an experimentally determined value characteristic of a particular material. This parameter is called the **specific heat** or **specific heat capacity** (J/kg$\cdot$°C). The specific heat capacity of a material is determined by measuring the amount of heat required to raise the temperature of 1 kg of the material by 1°C. For ordinary temperatures and pressures, the value of $C$ is considered constant. Values for the specific heat capacity of common materials are shown in the table below: Material | Specific Heat Capacity (J/kg$\cdot$°C) --- | --- Aluminum | 903 Brass | 376 Carbon | 710 Copper | 385 Glass | 664 Ice | 2060 Iron | 450 Lead | 130 Methanol | 2450 Silver | 235 Stainless Steel | 460 Steam | 2020 Tin | 217 Water | 4180 Zinc | 388 Use the slider below to observe the relationship between the specific heat capacity and the amount of heat required to raise the temperature of a 5 kg mass by 50 °C.
###Code
#import ipywidgets as widgets
#from ipywidgets import Output, VBox, HBox
mass_1 = 5
delta_temperature = 50
specific_heat_capacity = widgets.IntSlider(description="C (J/kg⋅°C)",min=100,max=1000)
#Boldface text between these strings
start_bold = "\033[1m"; end_bold = "\033[0;0m"
def f(specific_heat_capacity):
heat_J = int((mass_1 * specific_heat_capacity * delta_temperature))
heat_kJ = int(heat_J/1000)
print(start_bold + "Heat = (mass) X (specific heat capacity) X (change in temperature)" + end_bold)
print("Heat = ({} X {} X {}) J = {} J or {} kJ".format(mass_1, specific_heat_capacity, delta_temperature, heat_J, heat_kJ))
out1 = widgets.interactive_output(f,{'specific_heat_capacity': specific_heat_capacity,})
HBox([VBox([specific_heat_capacity]), out1])
###Output
_____no_output_____
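###Markdown
The same calculation is easy to reproduce directly in Python. The short sketch below evaluates $Q = mC\Delta T$ for the 5 kg mass and 50°C temperature change used by the slider, at a few specific heat capacities taken from the table.
###Code
# Quick check of Q = m*C*dT for a 5 kg mass warmed by 50 °C at several specific heats.
m_check, dT_check = 5, 50                # kg, °C
for C_check in (130, 903, 4180):         # lead, aluminum, water (J/kg·°C)
    Q_check = m_check * C_check * dT_check
    print("C = {} J/(kg·°C) -> Q = {} J or {} kJ".format(C_check, Q_check, Q_check/1000))
###Output
_____no_output_____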
###Markdown
**Question:** *As the specific heat increases, the amount of heat required to cause the temperature change:*
###Code
#import ipywidgets as widgets
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = "Increases"
option_2 = "Decreases"
option_3 = "Remains constant"
option_4 = "Equals zero"
multiple_choice(option_1, option_2, option_3, option_4)
###Output
_____no_output_____
###Markdown
ExampleHow many kilojoules (kJ) of heat are needed to raise the temperature of a 3.0 kg piece of aluminum from 10°C to 50°C? Round the answer to 2 significant figures.
###Code
#import ipywidgets as widgets
#from ipywidgets import Output, VBox
#from IPython.display import clear_output, display, HTML
out2 = Output()
button_step1 = widgets.Button(description="Step One", layout=Layout(width='20%', height='100%'), button_style='primary')
count1 = 1
text1_1 = widgets.HTMLMath(value="The first step is to identify all known and unknown variables required to solve the problem. In this case, three variables are known ($m$, $C$, $\Delta T$), and one variable is unknown ($Q$):")
text1_2 = widgets.HTMLMath(value="$m$ = 3.0 kg")
text1_3 = widgets.HTMLMath(value="$\Delta T$ = 50°C $-$ 10°C = 40°C")
text1_4 = widgets.HTMLMath(value="$C$ = 903 J/kg$\cdot$°C (The specific heat capacity for aluminum may be found in the table above.)")
text1_5 = widgets.HTMLMath(value="$Q$ = ?")
def on_button_step1_clicked(b):
global count1
count1 += 1
with out2:
clear_output()
if count1 % 2 == 0:
display(text1_1, text1_2, text1_3, text1_4, text1_5)
display(VBox([button_step1, out2]))
button_step1.on_click(on_button_step1_clicked)
#import ipywidgets as widgets
#from ipywidgets import Output, VBox
#from IPython.display import clear_output, display, HTML
out3 = Output()
button_step2 = widgets.Button(description="Step Two", layout=Layout(width='20%', height='100%'), button_style='primary')
count2 = 1
text2_1 = widgets.HTMLMath(value="Substitute each known variable into the formula to solve for the unknown variable:")
text2_2 = widgets.HTMLMath(value="$Q = mC\Delta T$")
text2_3 = widgets.HTMLMath(value="$Q$ = (3.0 kg) (903 J/kg$\cdot$°C) (40°C) = 108,360 J")
text2_4 = widgets.HTMLMath(value="$Q$ = 108,360 J")
def on_button_step2_clicked(b):
global count2
count2 += 1
with out3:
clear_output()
if count2 % 2 == 0:
display(text2_1, text2_2, text2_3, text2_4)
display(VBox([button_step2, out3]))
button_step2.on_click(on_button_step2_clicked)
#import ipywidgets as widgets
#from ipywidgets import Output, VBox
#from IPython.display import clear_output, display, HTML
out4 = Output()
button_step3 = widgets.Button(description="Step Three", layout=Layout(width='20%', height='100%'), button_style='primary')
count3 = 1
text3_1 = widgets.HTMLMath(value="Round the answer to the correct number of significant figures and convert to the correct units (if needed):")
text3_2 = widgets.HTMLMath(value="$Q$ = 108,360 J = 110,000 J or 110 kJ")
text3_3 = widgets.HTMLMath(value="The amount of heat required to increase the temperature of a 3.0 kg piece of aluminum from 10°C to 50°C is 110,000 J or 110 kJ.")
def on_button_step3_clicked(b):
global count3
count3 += 1
with out4:
clear_output()
if count3 % 2 == 0:
display(text3_1, text3_2, text3_3)
display(VBox([button_step3, out4]))
button_step3.on_click(on_button_step3_clicked)
###Output
_____no_output_____
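###Markdown
As a quick numerical check of the example above, the same calculation can be reproduced in one line of Python.
###Code
# Verify the worked example: 3.0 kg of aluminum warmed from 10 °C to 50 °C.
Q_aluminum = 3.0 * 903 * (50 - 10)   # Q = m * C * dT
print("Q = {} J, or about {} kJ to 2 significant figures".format(Q_aluminum, round(Q_aluminum, -4)/1000))
###Output
_____no_output_____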
###Markdown
PracticeThe heat transfer equation shown above may be rearranged to solve for each variable in the equation. These rearrangements are shown below:$Q = mC\Delta T \qquad m = \dfrac{Q}{C \Delta T} \qquad C = \dfrac{Q}{m \Delta T} \qquad \Delta T = \dfrac{Q}{mC}$Try the four different practice problems below. Each question will require the use of one or more of the formulas above. Use the *Generate New Question* button to generate additional practice problems.
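###Markdown
Before trying the practice problems, here is a minimal sketch of the four rearrangements as Python helper functions (SI units throughout; the function names are our own).
###Code
# The heat equation and its rearrangements (units: J, kg, J/(kg·°C), °C).
def heat_from(m, C, dT):
    return m * C * dT
def mass_from(Q, C, dT):
    return Q / (C * dT)
def specific_heat_from(Q, m, dT):
    return Q / (m * dT)
def delta_T_from(Q, m, C):
    return Q / (m * C)
# Example: recover the mass of the aluminum block from the worked example above.
print(mass_from(108360, 903, 40))   # ≈ 3.0 kg
###Output
_____no_output_____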
###Code
#from IPython.display import Javascript, display
#from ipywidgets import widgets
def generate_new_question(ev):
display(Javascript('IPython.notebook.execute_cell()'))
button_generate_question = widgets.Button(description="Generate New Question", layout=Layout(width='20%', height='100%'), button_style='success')
button_generate_question.on_click(generate_new_question)
display(button_generate_question)
#Randomize variables
mass = round(random.uniform(25.0, 50.0), 1)
temperature_initial = round(random.uniform(15.0, 25.0), 1)
temperature_final = round(random.uniform(55.0, 65.0), 1)
#Dictionary of materials
materials = {"aluminum": 903, "brass": 376, "carbon": 710, "copper": 385, "glass": 664, "iron": 450, "silver": 235, "stainless steal": 460, "tin": 217, "zinc": 388}
material = random.choice(list(materials.keys()))
#Print question
question = "How much heat is required to raise the temperature of a {} g sample of {} from {}°C to {}°C?".format(mass, material, temperature_initial, temperature_final)
print(question)
#Answer and option calculations
answer = (mass/1000) * materials[material] * (temperature_final - temperature_initial)
#Define range of values for random multiple choices
mini = 100
maxa = 2300
#Create three choices that are unique (and not equal to the answer)
choice_list = random.sample(range(mini,maxa),3)
while choice_list.count(int(answer)) >= 1:
choice_list = random.sample(range(mini,maxa),3)
#Round options to the specified number of significant figures
def round_sf(number, significant):
return round(number, significant - len(str(number)))
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(round_sf(int(answer),3)) + " J"
option_2 = str(round_sf(int(choice_list[0]),3)) + " J"
option_3 = str(round_sf(int(choice_list[1]),3)) + " J"
option_4 = str(round_sf(int(choice_list[2]),3)) + " J"
multiple_choice(option_1, option_2, option_3, option_4)
#import math
#from IPython.display import Javascript, display
#from ipywidgets import widgets
def generate_new_question(ev):
display(Javascript('IPython.notebook.execute_cell()'))
button_generate_question = widgets.Button(description="Generate New Question", layout=Layout(width='20%', height='100%'), button_style='success')
button_generate_question.on_click(generate_new_question)
display(button_generate_question)
#Randomize variables
heat = random.randint(10, 250)
temperature_initial = round(random.uniform(10.0, 35.0), 1)
temperature_final = round(random.uniform(45.0, 100.0), 1)
#Dictionary of materials
materials = {"aluminum": 903, "brass": 376, "carbon": 710, "copper": 385, "glass": 664, "iron": 450, "silver": 235, "stainless steal": 460, "tin": 217, "zinc": 388}
material = random.choice(list(materials.keys()))
#Print question
question = "Suppose some {} lost {} kJ of heat as it cooled from {}°C to {}°C. Find the mass. Note: you will need to make the sign of Q negative because heat is flowing out of the material as it cools.".format(material, heat, temperature_final, temperature_initial)
print(question)
#Answer calculation
answer = (-heat*1000) / (materials[material] * (temperature_initial - temperature_final))
#Define range of values for random multiple choices
mini = 100
maxa = 2000
#Create three choices that are unique (and not equal to the answer)
choice_list = random.sample(range(mini,maxa),3)
while choice_list.count(int(answer)) >= 1:
choice_list = random.sample(range(mini,maxa),3)
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str('{:.2f}'.format(round(answer,2))) + " kg"
option_2 = str(round(choice_list[0],2)/100) + " kg"
option_3 = str(round(choice_list[1],2)/100) + " kg"
option_4 = str(round(choice_list[2],2)/100) + " kg"
multiple_choice(option_1, option_2, option_3, option_4)
#from IPython.display import Javascript, display
#from ipywidgets import widgets
def generate_new_question(ev):
display(Javascript('IPython.notebook.execute_cell()'))
button_generate_question = widgets.Button(description="Generate New Question", layout=Layout(width='20%', height='100%'), button_style='success')
button_generate_question.on_click(generate_new_question)
display(button_generate_question)
#Randomize variables
heat = round(random.uniform(23.00, 26.00),1)
mass = round(random.uniform(1.00, 3.00), 2)
temperature_initial = round(random.uniform(24.0, 25.0), 1)
temperature_final = round(random.uniform(35.0, 36.0), 1)
#Print question
question = "A newly made synthetic material weighing {} kg requires {} kJ to go from {}°C to {}°C (without changing state). What is the specific heat capacity of this new material?".format(mass, heat, temperature_initial, temperature_final)
print(question)
#Answer calculation
answer = (heat*1000) / (mass * (temperature_final - temperature_initial))
#Define range of values for random multiple choices
mini = 990
maxa = 2510
#Create three choices that are unique (and not equal to the answer)
choice_list = random.sample(range(mini,maxa),3)
while choice_list.count(int(answer)) >= 1:
choice_list = random.sample(range(mini,maxa),3)
#Round options to the specified number of significant figures
def round_sf(number, significant):
return round(number, significant - len(str(number)))
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(round_sf(int(answer),3)) + " J/(kg°C)"
option_2 = str(round_sf(int(choice_list[0]),3)) + " J/(kg°C)"
option_3 = str(round_sf(int(choice_list[1]),3)) + " J/(kg°C)"
option_4 = str(round_sf(int(choice_list[2]),3)) + " J/(kg°C)"
multiple_choice(option_1, option_2, option_3, option_4)
#import math
#from IPython.display import Javascript, display
#from ipywidgets import widgets
def generate_new_question(ev):
display(Javascript('IPython.notebook.execute_cell()'))
button_generate_question = widgets.Button(description="Generate New Question", layout=Layout(width='20%', height='100%'), button_style='success')
button_generate_question.on_click(generate_new_question)
display(button_generate_question)
#Dictionary of materials
materials = {"aluminum": 903, "brass": 376, "carbon": 710, "copper": 385, "glass": 664, "iron": 450, "silver": 235, "stainless steal": 460, "tin": 217, "zinc": 388}
material = random.choice(list(materials.keys()))
#Randomize Variables
heat = random.randint(100, 150)
mass = round(random.uniform(1.0, 5.0), 1)
temperature_initial = round(random.uniform(10.0, 30.0), 1)
temperature_final = round(random.uniform(40.0, 60.0), 1)
#Determine question type
question_type = random.randint(1,3)
if question_type == 1:
#Type 1: Finding change in temperature
question = "If {} kg of {} receives {} kJ of heat, determine its change in temperature to one decimal place.".format(mass, material, heat)
print(question)
answer = (heat*1000) / (materials[material] * mass)
elif question_type == 2:
#Type 2: Finding final temperature
question = "If {} kg of {} receives {} kJ of heat, and if the {}'s initial temperature is {}°C, determine its final temperature to one decimal place. Hint: ΔT = final temperature - initial temperature.".format(mass, material, heat, material, temperature_initial)
print(question)
answer = ((heat*1000) / (materials[material] * mass)) + temperature_initial
elif question_type == 3:
#Type 3: Finding initial temperature
question = "If {} kg of {} receives {} kJ of heat, and if the {}'s final temperature is {}°C, determine its initial temperature to one decimal place. Hint: ΔT = final temperature - initial temperature.".format(mass, material, heat, material, temperature_final)
print(question)
answer = temperature_final - ((heat*1000) / (materials[material] * mass))
#Define range of values for random multiple choices
mini = int(answer*100 - 1000)
maxa = int(answer*100 + 1000)
#Create three choices that are unique (and not equal to the answer)
choice_list = random.sample(range(mini,maxa),3)
while choice_list.count(int(answer)) >= 1:
choice_list = random.sample(range(mini,maxa),3)
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str((round(answer,1))) + " °C"
option_2 = str(round(choice_list[0]/100,1)) + " °C"
option_3 = str(round(choice_list[1]/100,1)) + " °C"
option_4 = str(round(choice_list[2]/100,1)) + " °C"
multiple_choice(option_1, option_2, option_3, option_4)
###Output
_____no_output_____
###Markdown
Change of PhaseIn the previous examples and exercises, the material remained in a constant state while heat was added or taken away. However, the addition or subtraction of heat is often accompanied by a **phase change**. The three most common phases are solid, liquid, and gas: **Problem:** *Determine the amount of heat required to raise the temperature of a 100 g block of ice from -20°C to steam at 200°C.***Attempt:** There are two phase changes in this problem: (1) the melting of ice into water, and (2) the boiling of water into steam. To determine $Q$, let's utilize the heat formula: $$Q=mC\Delta T$$ To solve this problem, we can split it up into steps that are simple to calculate. For example, we can start by calculating the heat required to warm ice from -20°C to 0°C. Then, we can calculate the heat required to warm water from 0°C to 100°C. Finally, we can calculate the heat required to warm steam from 100°C to 200°C:$Q_{ice}$ = (0.100 kg) (2060 J/kg$\cdot$°C) (0°C - (-20°C)) = 4120 J$Q_{water}$ = (0.100 kg) (4180 J/kg$\cdot$°C) (100°C - 0°C) = 41800 J$Q_{steam}$ = (0.100 kg) (2020 J/kg$\cdot$°C) (200°C - 100°C) = 20200 JThen, by adding up the heat calculated in each step, the original problem can be solved: $Q$ = (4120 + 41800 + 20200) J = 66120 J, or 66.1 kJ. ExperimentLet's conduct an experiment to check the above calculation. We will start with a 100 g sample of ice at -20°C, and then add a constant amount of heat until the entire sample is converted to steam at 200°C. Every minute, we will take the temperature of the sample.The data from this experiment is shown in the interactive graphs below. The temperature of the material versus time is shown on left. The heat added to the material versus time is shown on the right.
###Code
#import ipywidgets as widgets
#import matplotlib.pyplot as plt
#from ipywidgets import HBox, Output, VBox
#from IPython.display import clear_output
out5 = Output()
play = widgets.Play(interval=500, value=0, min=0, max=25, step=1, description="Press play", disabled=False)
time_slider = widgets.IntSlider(description='Time (min)', value=0, min=0, max=25, continuous_update = False)
widgets.jslink((play, 'value'), (time_slider, 'value'))
#Make lists of x and y values
x_values = list(range(26))
y_values = [-20, -10, 0, 0, 10, 40, 80, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 120, 140, 160, 180, 200]
heat_y = []
increment = 0
for i in range(26):
heat_y.append(increment)
increment += 13.021
#Plot graphs
def plot_graphs(change):
x = change['new']
with out5:
clear_output(wait=True)
temp_x_values = []
temp_y_values = []
graph2y = []
for i in range(x+1):
temp_x_values.append(x_values[i])
temp_y_values.append(y_values[i])
graph2y.append(heat_y[i])
plt.figure(figsize=(15,5))
plt.style.use('seaborn')
plt.rcParams["axes.edgecolor"] = "black"
plt.rcParams["axes.linewidth"] = 0.5
plt.subplot(1,2,1)
plt.ylim(-30, 210)
plt.xlim(-0.5,26)
plt.scatter(temp_x_values, temp_y_values)
plt.ylabel('Temperature (°C)')
plt.xlabel('Time (min)')
plt.subplot(1,2,2)
plt.ylim(-25, 350)
plt.xlim(-2,26)
plt.scatter(temp_x_values, graph2y, color='red')
plt.ylabel('Heat (kJ)')
plt.xlabel('Time (min)')
plt.show()
#Get slider value
time_slider.observe(plot_graphs, 'value')
plot_graphs({'new': time_slider.value})
#Display widget
display(HBox([play, time_slider]))
display(out5)
###Output
_____no_output_____
###Markdown
**Question**: *Examine the graph on the left. It shows the temperature of the material at each minute. At what temperature(s) does the temperature remain constant for some time?*
###Code
#import ipywidgets as widgets
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = "0°C and 100°C. We have horizontal lines at those temperatures."
option_2 = "-20°C, 0°C, 100°C, and 200°C."
option_3 = "100°C."
option_4 = "The temperature is never constant."
multiple_choice(option_1, option_2, option_3, option_4)
###Output
_____no_output_____
###Markdown
**Question:** *Examine the graph on the right. It shows how much heat was required to turn a block of ice at -20°C into steam at 200°C. Does this agree with the value we arrived at from our above calculation (66.1 kJ)?*
###Code
#import ipywidgets as widgets
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = "Based on the graph, the amount of heat required is around 325 kJ. It does not agree with our calculation."
option_2 = "Based on the graph, the amount of heat required is close enough to our calculation; hence, it does agree."
option_3 = "Both values match perfectly."
option_4 = "The values are close and it is impossible to say if they match perfectly or not."
multiple_choice(option_1, option_2, option_3, option_4)
###Output
_____no_output_____
###Markdown
**Question**: *Examine the graph on the right. Observe that the slope of the line is constant. What does this imply?*
###Code
#import ipywidgets as widgets
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = "The amount of heat added to the system is constant for the entire 25 min period."
option_2 = "The amount of heat added to the system is not constant, the rate increases throughout the 25 min period."
option_3 = "No heat is added at the start, but around 325 kJ of heat is added at the very end."
option_4 = "As time increases, the amount of heat required decreases."
multiple_choice(option_1, option_2, option_3, option_4)
###Output
_____no_output_____
###Markdown
Experimental ResultsOur experimental data indicates that our calculation of 66.1 kJ is incorrect and that it in fact takes around 325 kJ to heat ice from -20°C to steam at 200°C. *So what did we miss?***Answer:** The *phase changes*.The graph on the right shows us that the rate at which heat was added to the system over the 25 minute period was constant, yet the temperature remained constant at two points for some time (0°C and 100°C). How is this possible? That is, *how can we add heat to a material while its temperature remains constant?***Answer:** Every material has two common "critical temperature points". These are the points at which the *state* of the material *changes*. For water, these points are at 0°C and 100°C. If heat is coming into a material *during a phase change*, then this energy is used to overcome the intermolecular forces between the molecules of the material.Let's consider when ice melts into water at 0°C. Immediately after the molecular bonds in the ice are broken, the molecules are moving (vibrating) at the same average speed as before, and so their average kinetic energy remains the same. *Temperature* is precisely a measure of the average kinetic energy of the particles in a material. Hence, during a phase change, the temperature remains constant. Latent Heat of Fusion and Vaporization The **latent heat of fusion ($H_f$)** is the quantity of heat needed to melt 1 kg of a solid to a liquid without a change in temperature.The **latent heat of vaporization ($H_v$)** is the quantity of heat needed to vaporize 1 kg of a liquid to a gas without a change in temperature.The latent heats of fusion and vaporization are empirical characteristics of a particular material. As such, they must be experimentally determined. Values for the latent heats of fusion and vaporization of common materials are shown in the table below:Materials | Heat of Fusion (J/kg) | Heat of Vaporization (J/kg) --- | --- | --- Copper | $2.05 \times 10^5$ | $5.07 \times 10^6$ Gold | $6.03 \times 10^4$ | $1.64 \times 10^6$ Iron | $2.66 \times 10^5$ | $6.29 \times 10^6$ Lead | $2.04 \times 10^4$ | $8.64 \times 10^5$ Mercury | $1.15 \times 10^4$ | $2.72 \times 10^5$ Methanol | $1.09 \times 10^5$ | $8.78 \times 10^5$ Silver | $1.04 \times 10^4$ | $2.36 \times 10^6$ Water (ice) | $3.34 \times 10^5$ | $2.26 \times 10^6$ The following formulae are used to calculate the amount of heat needed to change a material from a solid to a liquid (fusion), or from a liquid to a gas (vaporization):$Q_f = mH_f \qquad Q_v = mH_v$ Example (revisited)Recall our previous problem:**Problem:** *Determine the amount of heat required to raise the temperature of a 100 g block of ice from -20°C to steam at 200°C.***Solution:** Previously, we split the problem into three steps. It turns out that those steps correctly calculated the heat required to warm ice from -20°C to 0°C, water from 0°C to 100°C, and steam from 100°C to 200°C. What was absent was the latent heat required to complete the phase changes at 0°C and 100°C. Therefore, we need to **add two more steps**, which incorporate the above formulae. For completion, the previous steps are restated and the entire calculation is shown in **five steps** below (plus a final step to sum up the heats calculated in the previous steps):
###Code
#import ipywidgets as widgets
#from ipywidgets import Output, VBox, HBox
#from IPython.display import clear_output, SVG, HTML, display
out6 = Output()
frame_1 = 1
#Toggle images
def show_steps_1():
global frame_1
I11 = widgets.HTMLMath(value="Step 1: Calculate the heat required to change ice from -20°C to 0°C. Since the temperature changes, we use $Q = mCΔT$.")
Q11 = widgets.HTMLMath(value="$Q_{1}$ = (0.1 kg) (2060 J/kg°C) (0°C - (-20°C)) = 4120 J")
I12 = widgets.HTMLMath(value="Step 2: Calculate the heat required to change ice at 0°C to water at 0°C. Since the temperature does not change as we are at the melting point of water, we use $Q = mH_{f}$.")
Q12 = widgets.HTMLMath(value="$Q_{2}$ = (0.1 kg) (334000 J/kg) = 33400 J")
I13 = widgets.HTMLMath(value="Step 3: Calculate the heat required to change water from 0°C to 100°C. Since the temperature changes, we use $Q = mCΔT$.")
Q13 = widgets.HTMLMath(value="$Q_{3}$ = (0.1 kg) (4180 J/kg°C) (100°C - 0°C) = 41800 J")
I14 = widgets.HTMLMath(value="Step 4: Calculate the heat required to change water at 100°C to steam at 100°C. Since the temperature does not change at we are at the boiling point of water, we use $Q = mH_{v}$.")
Q14 = widgets.HTMLMath(value="$Q_{4}$ = (0.1 kg) (2260000 J/kg) = 226000 J")
I15 = widgets.HTMLMath(value="Step 5: Calculate the heat required to change steam from 100°C to 200°C. Since the temperature changes, we use $Q = mCΔT$.")
Q15 = widgets.HTMLMath(value="$Q_{5}$ = (0.1 kg) (2020 J/kg°C) (200°C - 100°C) = 20200 J")
I16 = widgets.HTMLMath(value="Summary: Calculate total heat by adding up the values calculated in the previous steps. $Q$ = $Q_1$ + $Q_2$ + $Q_3$ + $Q_4$ + $Q_5$")
Q16 = widgets.HTMLMath(value="$Q$ = (4120 + 33400 + 41800 + 226000 + 20200) J = 325520 J or 326 kJ")
if frame_1 == 0:
display(SVG("Images/phase_diagram_1_0.svg"))
frame_1 = 1
elif frame_1 == 1:
display(SVG("Images/phase_diagram_1_1.svg"))
display(I11, Q11)
frame_1 = 2
elif frame_1 == 2:
display(SVG("Images/phase_diagram_1_2.svg"))
display(I11, Q11, I12, Q12)
frame_1 = 3
elif frame_1 == 3:
display(SVG("Images/phase_diagram_1_3.svg"))
display(I11, Q11, I12, Q12, I13, Q13)
frame_1 = 4
elif frame_1 == 4:
display(SVG("Images/phase_diagram_1_4.svg"))
display(I11, Q11, I12, Q12, I13, Q13, I14, Q14)
frame_1 = 5
elif frame_1 == 5:
display(SVG("Images/phase_diagram_1_5.svg"))
display(I11, Q11, I12, Q12, I13, Q13, I14, Q14, I15, Q15)
frame_1 = 6
elif frame_1 == 6:
display(SVG("Images/phase_diagram_1_6.svg"))
display(I11, Q11, I12, Q12, I13, Q13, I14, Q14, I15, Q15, I16, Q16)
frame_1 = 0
button_phase_diagram_1 = widgets.Button(description="Show Next Step", button_style = 'primary')
display(button_phase_diagram_1)
def on_submit_button_phase_diagram_1_clicked(b):
with out6:
clear_output(wait=True)
show_steps_1()
with out6:
display(SVG("Images/phase_diagram_1_0.svg"))
button_phase_diagram_1.on_click(on_submit_button_phase_diagram_1_clicked)
display(out6)
###Output
_____no_output_____
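###Markdown
The five steps above can be checked numerically. The short sketch below adds the sensible-heat and latent-heat contributions for 100 g of ice taken from -20°C to steam at 200°C.
###Code
# Verify the five-step calculation: 0.100 kg of ice at -20 °C heated to steam at 200 °C.
m_sample = 0.100                                # kg
Q1 = m_sample * 2060 * (0 - (-20))              # warm ice from -20 °C to 0 °C
Q2 = m_sample * 334000                          # melt ice at 0 °C (latent heat of fusion)
Q3 = m_sample * 4180 * (100 - 0)                # warm water from 0 °C to 100 °C
Q4 = m_sample * 2260000                         # boil water at 100 °C (latent heat of vaporization)
Q5 = m_sample * 2020 * (200 - 100)              # warm steam from 100 °C to 200 °C
Q_total = Q1 + Q2 + Q3 + Q4 + Q5
print("Total heat = {} J, or about {} kJ".format(Q_total, round(Q_total/1000)))   # ≈ 325520 J ≈ 326 kJ
###Output
_____no_output_____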
###Markdown
**Note** that the *state* of a material can include more than one *phase*. For example, at 0°C, the state of water includes both solid (ice) and liquid (water) phases. At 100°C, the state of water includes both liquid (water) and gas (steam) phases.It is common to cool down a material (as opposed to heating it up). In this scenario, heat must be taken away. By convention, a negative $Q$ is used to represent heat being taken away from a material (cooling), while a positive $Q$ is used to represent heat being added to a material (warming). Be aware of the sign of $Q$ as it indicates the direction the heat is flowing. For $Q=mH_f$ and $Q=mH_v$, you must be aware of whether heat is being added to or taken away from the material. If heat is being taken away, then a negative sign must be placed in front of $H_f$ and $H_v$. It is not necessary for each problem to be five steps. A problem could have 1-5 steps depending on the situation. Let's do another example together. An interactive graph is provided to help determine the number of steps required. ExampleHow much heat must be removed to change 10.0 g of steam at 120.0°C to water at 50°C? Round to two significant figures.
###Code
#import ipywidgets as widgets
#import matplotlib.pyplot as plt
#from ipywidgets import HBox, Output, VBox
#from IPython.display import clear_output
out7 = Output()
play2 = widgets.Play(interval=500, value=0, min=0, max=25, step=1, description="Press play", disabled=False)
time_slider2 = widgets.IntSlider(description='Time', value=0, min=0, max=20, continuous_update = False)
widgets.jslink((play2, 'value'), (time_slider2, 'value'))
#Make lists of x and y values
x_values2 = list(range(21))
y_values2 = [120, 110, 100, 100, 100, 100, 100, 100, 100, 100, 100, 95, 90, 85, 80, 75, 70, 65, 60, 55, 50]
heat_y2 = []
increment2 = 0
for i in range(26):
heat_y2.append(increment2)
increment2 += 13021
#Plot graph
def time_temp(change):
x = change['new']
with out7:
clear_output(wait=True)
temp_x_values2 = []
temp_y_values2 = []
graph2y2 = []
for i in range(x+1):
temp_x_values2.append(x_values2[i])
temp_y_values2.append(y_values2[i])
graph2y2.append(heat_y2[i])
plt.figure(figsize=(7,5))
plt.style.use('seaborn')
plt.rcParams["axes.edgecolor"] = "black"
plt.rcParams["axes.linewidth"] = 0.5
plt.ylim(0, 150)
plt.xlim(-0.5,26)
plt.xticks([])
plt.scatter(temp_x_values2, temp_y_values2)
plt.ylabel('Temperature (°C)')
plt.xlabel('Time')
plt.figtext(0.5, 0.01, "This graph consists of three line-segments. This indicates that we require three steps.", wrap=True, horizontalalignment='center', fontsize=12)
plt.show()
#Get slider value
time_temp({'new': time_slider2.value})
time_slider2.observe(time_temp, 'value')
#Display widget
display(HBox([play2, time_slider2]))
display(out7)
#import ipywidgets as widgets
#from IPython.display import clear_output, SVG
out8 = widgets.Output()
frame_2 = 1
#Toggle images
def show_steps_2():
global frame_2
I21 = widgets.HTMLMath(value="Step 1: Calculate the heat loss required to change steam from 120°C to 100°C. Since there is no phase change taking place, we use $Q = mCΔT$.")
Q21 = widgets.HTMLMath(value="$Q_{1}$ = (0.01 kg) (2020 J/kg°C) (100°C - 120°C) = -404 J")
I22 = widgets.HTMLMath(value="Step 2: Calculate the heat loss required to change steam at 100°C to water at 100°C. Since a phase change is taking place (condensation), we use $Q = -mH_{v}$.")
Q22 = widgets.HTMLMath(value="$Q_{2}$ = - (0.01 kg) (2260000 J/kg) = -22600 J")
I23 = widgets.HTMLMath(value="Step 3: Calculate the heat loss required to change water from 100°C to 50°C. Since there is no phase change taking place, we use $Q = mCΔT$.")
Q23 = widgets.HTMLMath(value="$Q_{3}$ = (0.01 kg) (4180 J/kg°C) (50°C - 100°C) = -2090 J")
I24 = widgets.HTMLMath(value="Summary: Calculate the total heat loss by adding up the values calculated in the previous steps. $Q$ = $Q_1$ + $Q_2$ + $Q_3$")
Q24 = widgets.HTMLMath(value="$Q$ = (-404 + -22600 + -2090) J = -25000 J or -25 kJ")
if frame_2 == 0:
display(SVG("Images/phase_diagram_2_0.svg"))
frame_2 = 1
elif frame_2 == 1:
display(SVG("Images/phase_diagram_2_1.svg"))
display(I21, Q21)
frame_2 = 2
elif frame_2 == 2:
display(SVG("Images/phase_diagram_2_2.svg"))
display(I21, Q21, I22, Q22)
frame_2 = 3
elif frame_2 == 3:
display(SVG("Images/phase_diagram_2_3.svg"))
display(I21, Q21, I22, Q22, I23, Q23)
frame_2 = 4
elif frame_2 == 4:
display(SVG("Images/phase_diagram_2_4.svg"))
display(I21, Q21, I22, Q22, I23, Q23, I24, Q24)
frame_2 = 0
button_phase_diagram_2 = widgets.Button(description="Show Next Step", button_style = 'primary')
display(button_phase_diagram_2)
def on_submit_button_phase_diagram_2_clicked(b):
with out8:
clear_output(wait=True)
show_steps_2()
with out8:
display(SVG("Images/phase_diagram_2_0.svg"))
button_phase_diagram_2.on_click(on_submit_button_phase_diagram_2_clicked)
display(out8)
###Output
_____no_output_____
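###Markdown
The three steps above can also be checked with a short calculation: cool 10.0 g of steam from 120°C to 100°C, condense it, then cool the resulting water to 50°C.
###Code
# Verify the three-step calculation: 0.010 kg of steam at 120 °C cooled to water at 50 °C.
m_steam = 0.010                                # kg
Q1 = m_steam * 2020 * (100 - 120)              # cool steam from 120 °C to 100 °C
Q2 = -m_steam * 2260000                        # condense steam at 100 °C (heat removed)
Q3 = m_steam * 4180 * (50 - 100)               # cool water from 100 °C to 50 °C
Q_removed = Q1 + Q2 + Q3
print("Total heat = {} J, or about {} kJ".format(Q_removed, round(Q_removed/1000)))   # ≈ -25094 J ≈ -25 kJ
###Output
_____no_output_____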
###Markdown
PracticeThere are many variations that are possible with specific heat and latent heat questions. Use the *Generate New Question* button to generate additional practice problems. These practice problems will vary from one to five steps. **One Step Problem**
###Code
#import math
#import random
#from IPython.display import Javascript, display
#from ipywidgets import widgets, Layout
def generate_new_question(ev):
display(Javascript('IPython.notebook.execute_cell()'))
button_generate_question = widgets.Button(description="Generate New Question", layout=Layout(width='20%', height='100%'), button_style='success')
button_generate_question.on_click(generate_new_question)
display(button_generate_question)
#Dictionary of materials
materials = {"aluminum": 903, "brass": 376, "carbon": 710, "copper": 385, "glass": 664, "iron": 450, "lead": 130, "silver": 235, "stainless Steal": 460, "tin": 217, "water": 4180, "zinc": 388}
material = random.choice(list(materials.keys()))
#Randomize variables
mass = round(random.uniform(100.0, 1000.0), 1)
temperature_initial, temperature_final = 0,0
variety1 = random.randint(1,5)
if variety1 == 1:
#Makes certain that initial and final temps are different
while temperature_initial == temperature_final:
temperature_initial = round(random.uniform(-50.0, 0.0), 1)
temperature_final = round(random.uniform(-50.0, 0.0), 1)
question = "How much heat is needed for a {} g block of ice at {}°C to change temperature to {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * 2060 * (temperature_final - temperature_initial)
elif variety1 == 2:
while temperature_initial == temperature_final:
temperature_initial = round(random.uniform(0.0, 100.0), 1)
temperature_final = round(random.uniform(0.0, 100.0), 1)
question = "How much heat is needed for {} g of water at {}°C to change temperature to {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * 4180 * (temperature_final - temperature_initial)
elif variety1 == 3:
while temperature_initial == temperature_final:
temperature_initial = round(random.uniform(100.0, 150.0), 1)
temperature_final = round(random.uniform(100.0, 150.0), 1)
question = "How much heat is needed for {} g of steam at {}°C to change temperature to {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * 2020 * (temperature_final - temperature_initial)
elif variety1 == 4:
temperature_initial = 0
temperature_final = 0
direction_variety = random.randint(1,2)
if direction_variety == 1:
question = "How much heat is needed to change {} g of ice at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * 334000
elif direction_variety == 2:
question = "How much heat is needed to change {} g of water at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * -334000
elif variety1 == 5:
temperature_initial = 100
temperature_final = 100
direction_variety = random.randint(1,2)
if direction_variety == 1:
question = "How much heat is needed to change {} g of water at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * 2260000
elif direction_variety == 2:
question = "How much heat is needed to change {} g of steam at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * -2260000
#Define range of values for random multiple choices
mini = -1000
maxa = 1000
#Create three choices that are unique (and not equal to the answer)
choice_list = random.sample(range(mini,maxa),3)
while choice_list.count(int(answer)) >= 1:
choice_list = random.sample(range(mini,maxa),3)
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(int(round(answer/1000))) + " kJ"
option_2 = str(round(choice_list[0],1)) + " kJ"
option_3 = str(round(choice_list[1],1)) + " kJ"
option_4 = str(round(-1*choice_list[2],1)) + " kJ"
multiple_choice(option_1, option_2, option_3, option_4)
###Output
_____no_output_____
###Markdown
**Two Step Problem**
###Code
#import math
#import random
#from IPython.display import Javascript, display
#from ipywidgets import widgets, Layout
def generate_new_question(ev):
display(Javascript('IPython.notebook.execute_cell()'))
button_generate_question = widgets.Button(description="Generate New Question", layout=Layout(width='20%', height='100%'), button_style='success')
button_generate_question.on_click(generate_new_question)
display(button_generate_question)
#Dictionary of materials
materials = {"aluminum": 903, "brass": 376, "carbon": 710, "copper": 385, "glass": 664, "iron": 450, "lead": 130, "silver": 235, "stainless Steal": 460, "tin": 217, "water": 4180, "zinc": 388}
material = random.choice(list(materials.keys()))
#Randomize variables
mass = round(random.uniform(100.0, 1000.0), 1)
temperature_initial, temperature_final = 0,0
variety2 = random.randint(1,4)
if variety2 == 1:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(-50.0, -1.0), 1)
temperature_final = 0
question = "How much heat is needed to change {} g of ice at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(temperature_final - temperature_initial)) + ((mass/1000) * 334000)
elif direction_variety == 2:
temperature_initial = 0
temperature_final = round(random.uniform(-50.0, -1.0), 1)
question = "How much heat is needed to change {} g of water at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(temperature_final - temperature_initial)) + ((mass/1000) * -334000)
elif variety2 == 2:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = 0
temperature_final = round(random.uniform(1.0, 99.0), 1)
question = "How much heat is needed to change {} g of ice at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(temperature_final - temperature_initial)) + ((mass/1000) * 334000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(1.0, 99.0), 1)
temperature_final = 0
question = "How much heat is needed to change {} g of water at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(temperature_final - temperature_initial)) + ((mass/1000) * -334000)
elif variety2 == 3:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(1.0, 99.0), 1)
temperature_final = 100
question = "How much heat is needed to change {} g of water at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(temperature_final - temperature_initial)) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = 100
temperature_final = round(random.uniform(1.0, 99.0), 1)
question = "How much heat is needed to change {} g of steam at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(temperature_final - temperature_initial)) + ((mass/1000) * -2260000)
elif variety2 == 4:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = 100
temperature_final = round(random.uniform(101.0, 150.0), 1)
question = "How much heat is needed to change {} g of water at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2020*(temperature_final - temperature_initial)) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(101.0, 150.0), 1)
temperature_final = 100
question = "How much heat is needed to change {} g of steam at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2020*(temperature_final - temperature_initial)) + ((mass/1000) * -2260000)
#Define range of values for random multiple choices
mini = -1000
maxa = 1000
#Create three choices that are unique (and not equal to the answer)
choice_list = random.sample(range(mini,maxa),3)
while choice_list.count(int(answer)) >= 1:
choice_list = random.sample(range(mini,maxa),3)
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(int(round(answer/1000))) + " kJ"
option_2 = str(round(choice_list[0],1)) + " kJ"
option_3 = str(round(choice_list[1],1)) + " kJ"
option_4 = str(round(-1*choice_list[2],1)) + " kJ"
multiple_choice(option_1, option_2, option_3, option_4)
###Output
_____no_output_____
###Markdown
**Three Step Problem**
###Code
#import math
#import random
#from IPython.display import Javascript, display
#from ipywidgets import widgets, Layout
def generate_new_question(ev):
display(Javascript('IPython.notebook.execute_cell()'))
button_generate_question = widgets.Button(description="Generate New Question", layout=Layout(width='20%', height='100%'), button_style='success')
button_generate_question.on_click(generate_new_question)
display(button_generate_question)
#Dictionary of materials
materials = {"aluminum": 903, "brass": 376, "carbon": 710, "copper": 385, "glass": 664, "iron": 450, "lead": 130, "silver": 235, "stainless Steal": 460, "tin": 217, "water": 4180, "zinc": 388}
material = random.choice(list(materials.keys()))
#Randomize variables
mass = round(random.uniform(100.0, 1000.0), 1)
temperature_initial, temperature_final = 0,0
variety3 = random.randint(1,2)
if variety3 == 1:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(-50.0, -1.0), 1)
temperature_final = round(random.uniform(1.0, 99.0), 1)
question = "How much heat is needed to change {} g of ice at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(0 - temperature_initial)) + ((mass/1000)*4180*(temperature_final - 0)) + ((mass/1000) * 334000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(1.0, 99.0), 1)
temperature_final = round(random.uniform(-50.0, -1.0), 1)
question = "How much heat is needed to change {} g of water at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(temperature_final-0)) + ((mass/1000)*4180*(0 - temperature_initial)) + ((mass/1000) * -334000)
elif variety3 == 2:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(1.0, 99.0), 1)
temperature_final = round(random.uniform(101.0, 150.0), 1)
question = "How much heat is needed to change {} g of water at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(100 - temperature_initial)) + ((mass/1000)*2020*(temperature_final - 100)) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(101.0, 150.0), 1)
temperature_final = round(random.uniform(1.0, 99.0), 1)
question = "How much heat is needed to change {} g of steam at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(temperature_final-100)) + ((mass/1000)*2020*(100 - temperature_initial)) + ((mass/1000) * -2260000)
#Define range of values for random multiple choices
mini = -1000
maxa = 1000
#Create three choices that are unique (and not equal to the answer)
choice_list = random.sample(range(mini,maxa),3)
while choice_list.count(int(answer)) >= 1:
choice_list = random.sample(range(mini,maxa),3)
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(int(round(answer/1000))) + " kJ"
option_2 = str(round(choice_list[0],1)) + " kJ"
option_3 = str(round(choice_list[1],1)) + " kJ"
option_4 = str(round(-1*choice_list[2],1)) + " kJ"
multiple_choice(option_1, option_2, option_3, option_4)
###Output
_____no_output_____
###Markdown
**Four Step Problem**
###Code
#import math
#import random
#from IPython.display import Javascript, display
#from ipywidgets import widgets, Layout
def generate_new_question(ev):
display(Javascript('IPython.notebook.execute_cell()'))
button_generate_question = widgets.Button(description="Generate New Question", layout=Layout(width='20%', height='100%'), button_style='success')
button_generate_question.on_click(generate_new_question)
display(button_generate_question)
#Dictionary of materials
materials = {"aluminum": 903, "brass": 376, "carbon": 710, "copper": 385, "glass": 664, "iron": 450, "lead": 130, "silver": 235, "stainless Steal": 460, "tin": 217, "water": 4180, "zinc": 388}
material = random.choice(list(materials.keys()))
#Randomize variables
mass = round(random.uniform(100.0, 1000.0), 1)
temperature_initial, temperature_final = 0,0
variety4 = random.randint(1,2)
if variety4 == 1:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(-50.0, -1.0), 1)
temperature_final = 100
question = "How much heat is needed to change {} g of ice at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(0 - temperature_initial)) + ((mass/1000)*4180*(temperature_final - 0)) + ((mass/1000) * 334000) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = 100
temperature_final = round(random.uniform(-50.0, -1.0), 1)
question = "How much heat is needed to change {} g of steam at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(temperature_final-0)) + ((mass/1000)*4180*(0 - temperature_initial)) + ((mass/1000) * -334000) + ((mass/1000) * -2260000)
elif variety4 == 2:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = 0
temperature_final = round(random.uniform(100.0, 150.0), 1)
question = "How much heat is needed to change {} g of ice at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2020*(temperature_final - 100)) + ((mass/1000)*4180*(100 - temperature_initial)) + ((mass/1000) * 334000) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(100.0, 150.0), 1)
temperature_final = 0
question = "How much heat is needed to change {} g of steam at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2020*(100-temperature_initial)) + ((mass/1000)*4180*(temperature_final-100)) + ((mass/1000) * -334000) + ((mass/1000) * -2260000)
#Define range of values for random multiple choices
mini = -1000
maxa = 1000
#Create three choices that are unique (and not equal to the answer)
choice_list = random.sample(range(mini,maxa),3)
while choice_list.count(int(answer)) >= 1:
choice_list = random.sample(range(mini,maxa),3)
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(int(round(answer/1000))) + " kJ"
option_2 = str(round(choice_list[0],1)) + " kJ"
option_3 = str(round(choice_list[1],1)) + " kJ"
option_4 = str(round(-1*choice_list[2],1)) + " kJ"
multiple_choice(option_1, option_2, option_3, option_4)
###Output
_____no_output_____
###Markdown
**Five Step Problem**
###Code
#import math
#import random
#from IPython.display import Javascript, display
#from ipywidgets import widgets, Layout
def generate_new_question(ev):
display(Javascript('IPython.notebook.execute_cell()'))
button_generate_question = widgets.Button(description="Generate New Question", layout=Layout(width='20%', height='100%'), button_style='success')
button_generate_question.on_click(generate_new_question)
display(button_generate_question)
#Dictionary of materials
materials = {"aluminum": 903, "brass": 376, "carbon": 710, "copper": 385, "glass": 664, "iron": 450, "lead": 130, "silver": 235, "stainless Steal": 460, "tin": 217, "water": 4180, "zinc": 388}
chosen_material = random.choice(list(materials.keys()))
#Randomize variables
mass = round(random.uniform(100.0, 1000.0), 1)
temperature_initial, temperature_final = 0,0
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(-50.0, -1.0), 1)
temperature_final = round(random.uniform(101.0, 150.0), 1)
question = "How much heat is needed to change {} g of ice at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(0 - temperature_initial)) + ((mass/1000)*4180*(100 - 0)) + ((mass/1000)*2020*(temperature_final - 100)) + ((mass/1000) * 334000) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(101.0, 150.0), 1)
temperature_final = round(random.uniform(-50.0, -1.0), 1)
question = "How much heat is needed to change {} g of steam at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2020*(100 - temperature_initial)) + ((mass/1000)*4180*(0 - 100)) + ((mass/1000)*2060*(temperature_final - 0)) + ((mass/1000) * -334000) + ((mass/1000) * -2260000)
#Define range of values for random multiple choices
mini = -1000
maxa = 1000
#Create three choices that are unique (and not equal to the answer)
choice_list = random.sample(range(mini,maxa),3)
while choice_list.count(int(answer)) >= 1:
choice_list = random.sample(range(mini,maxa),3)
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(int(round(answer/1000))) + " kJ"
option_2 = str(round(choice_list[0],1)) + " kJ"
option_3 = str(round(choice_list[1],1)) + " kJ"
option_4 = str(round(-1*choice_list[2],1)) + " kJ"
multiple_choice(option_1, option_2, option_3, option_4)
###Output
_____no_output_____
###Markdown
**Mixed Step Problems**In the dropdown menus below, select how many steps are required and the correct amount of heat required for each question.**Hint:** Have some scrap paper nearby for the calculations and be sure to sketch a diagram of each scenario to determine how many steps are required.
###Code
#import math
#import random
#from IPython.display import Javascript, display
#from ipywidgets import widgets, Layout
def generate_new_question(ev):
display(Javascript('IPython.notebook.execute_cell()'))
button_generate_question = widgets.Button(description="Generate New Question", layout=Layout(width='20%', height='100%'), button_style='success')
button_generate_question.on_click(generate_new_question)
display(button_generate_question)
#Randomize variables
mass = round(random.uniform(100.0, 1000.0), 1)
temperature_initial, temperature_final = 0,0
#Determine question type
question_type = random.randint(1,5)
if question_type == 1:
#Type 1: One Step
steps = "One Step"
type1_variety = random.randint(1,5)
if type1_variety == 1:
#Makes certain that initial and final temps are different
while temperature_initial == temperature_final:
temperature_initial = round(random.uniform(-50.0, 0.0), 1)
temperature_final = round(random.uniform(-50.0, 0.0), 1)
question = "How much heat is needed for a {} g block of ice at {}°C to change temperature to {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * 2060 * (temperature_final - temperature_initial)
elif type1_variety == 2:
while temperature_initial == temperature_final:
temperature_initial = round(random.uniform(0.0, 100.0), 1)
temperature_final = round(random.uniform(0.0, 100.0), 1)
question = "How much heat is needed for {} g of water at {}°C to change temperature to {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * 4180 * (temperature_final - temperature_initial)
elif type1_variety == 3:
while temperature_initial == temperature_final:
temperature_initial = round(random.uniform(100.0, 150.0), 1)
temperature_final = round(random.uniform(100.0, 150.0), 1)
question = "How much heat is needed for {} g of steam at {}°C to change temperature to {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * 2020 * (temperature_final - temperature_initial)
elif type1_variety == 4:
temperature_initial = 0
temperature_final = 0
direction_variety = random.randint(1,2)
if direction_variety == 1:
question = "How much heat is needed to change {} g of ice at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * 334000
elif direction_variety == 2:
question = "How much heat is needed to change {} g of water at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * -334000
elif type1_variety == 5:
temperature_initial = 100
temperature_final = 100
direction_variety = random.randint(1,2)
if direction_variety == 1:
question = "How much heat is needed to change {} g of water at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * 2260000
elif direction_variety == 2:
question = "How much heat is needed to change {} g of steam at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * -2260000
elif question_type == 2:
#Type 2: Two Steps
steps = "Two Steps"
type2_variety = random.randint(1,4)
if type2_variety == 1:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(-50.0, -1.0), 1)
temperature_final = 0
question = "How much heat is needed to change {} g of ice at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(temperature_final - temperature_initial)) + ((mass/1000) * 334000)
elif direction_variety == 2:
temperature_initial = 0
temperature_final = round(random.uniform(-50.0, -1.0), 1)
question = "How much heat is needed to change {} g of water at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(temperature_final - temperature_initial)) + ((mass/1000) * -334000)
elif type2_variety == 2:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = 0
temperature_final = round(random.uniform(1.0, 99.0), 1)
question = "How much heat is needed to change {} g of ice at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(temperature_final - temperature_initial)) + ((mass/1000) * 334000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(1.0, 99.0), 1)
temperature_final = 0
question = "How much heat is needed to change {} g of water at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(temperature_final - temperature_initial)) + ((mass/1000) * -334000)
elif type2_variety == 3:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(1.0, 99.0), 1)
temperature_final = 100
question = "How much heat is needed to change {} g of water at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(temperature_final - temperature_initial)) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = 100
temperature_final = round(random.uniform(1.0, 99.0), 1)
question = "How much heat is needed to change {} g of steam at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(temperature_final - temperature_initial)) + ((mass/1000) * -2260000)
elif type2_variety == 4:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = 100
temperature_final = round(random.uniform(101.0, 150.0), 1)
question = "How much heat is needed to change {} g of water at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2020*(temperature_final - temperature_initial)) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(101.0, 150.0), 1)
temperature_final = 100
question = "How much heat is needed to change {} g of steam at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2020*(temperature_final - temperature_initial)) + ((mass/1000) * -2260000)
elif question_type == 3:
#Type 3: Three Steps
steps = "Three Steps"
type3_variety = random.randint(1,2)
if type3_variety == 1:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(-50.0, -1.0), 1)
temperature_final = round(random.uniform(1.0, 99.0), 1)
question = "How much heat is needed to change {} g of ice at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(0 - temperature_initial)) + ((mass/1000)*4180*(temperature_final - 0)) + ((mass/1000) * 334000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(1.0, 99.0), 1)
temperature_final = round(random.uniform(-50.0, -1.0), 1)
question = "How much heat is needed to change {} g of water at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(temperature_final-0)) + ((mass/1000)*4180*(0 - temperature_initial)) + ((mass/1000) * -334000)
elif type3_variety == 2:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(1.0, 99.0), 1)
temperature_final = round(random.uniform(101.0, 150.0), 1)
question = "How much heat is needed to change {} g of water at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(100 - temperature_initial)) + ((mass/1000)*2020*(temperature_final - 100)) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(101.0, 150.0), 1)
temperature_final = round(random.uniform(1.0, 99.0), 1)
question = "How much heat is needed to change {} g of steam at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(temperature_final-100)) + ((mass/1000)*2020*(100 - temperature_initial)) + ((mass/1000) * -2260000)
elif question_type == 4:
#Type 4: Four Steps
steps = "Four Steps"
type4_variety = random.randint(1,2)
if type4_variety == 1:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(-50.0, -1.0), 1)
temperature_final = 100
question = "How much heat is needed to change {} g of ice at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(0 - temperature_initial)) + ((mass/1000)*4180*(temperature_final - 0)) + ((mass/1000) * 334000) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = 100
temperature_final = round(random.uniform(-50.0, -1.0), 1)
question = "How much heat is needed to change {} g of steam at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(temperature_final-0)) + ((mass/1000)*4180*(0 - temperature_initial)) + ((mass/1000) * -334000) + ((mass/1000) * -2260000)
elif type4_variety == 2:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = 0
temperature_final = round(random.uniform(100.0, 150.0), 1)
question = "How much heat is needed to change {} g of ice at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2020*(temperature_final - 100)) + ((mass/1000)*4180*(100 - temperature_initial)) + ((mass/1000) * 334000) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(100.0, 150.0), 1)
temperature_final = 0
question = "How much heat is needed to change {} g of steam at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2020*(100-temperature_initial)) + ((mass/1000)*4180*(temperature_final-100)) + ((mass/1000) * -334000) + ((mass/1000) * -2260000)
elif question_type == 5:
#Type 5: Five Steps
steps = "Five Steps"
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(-50.0, -1.0), 1)
temperature_final = round(random.uniform(101.0, 150.0), 1)
question = "How much heat is needed to change {} g of ice at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(0 - temperature_initial)) + ((mass/1000)*4180*(100 - 0)) + ((mass/1000)*2020*(temperature_final - 100)) + ((mass/1000) * 334000) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(101.0, 150.0), 1)
temperature_final = round(random.uniform(-50.0, -1.0), 1)
question = "How much heat is needed to change {} g of steam at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2020*(100 - temperature_initial)) + ((mass/1000)*4180*(0 - 100)) + ((mass/1000)*2060*(temperature_final - 0)) + ((mass/1000) * -334000) + ((mass/1000) * -2260000)
#Define range of values for random multiple choices
mini = -1000
maxa = 1000
#Create three choices that are unique (and not equal to the answer)
choice_list = random.sample(range(mini,maxa),3)
while choice_list.count(int(answer)) >= 1:
choice_list = random.sample(range(mini,maxa),3)
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(int(round(answer/1000))) + " kJ"
option_2 = str(round(choice_list[0],1)) + " kJ"
option_3 = str(round(choice_list[1],1)) + " kJ"
option_4 = str(round(-1*choice_list[2],1)) + " kJ"
option_list = [option_1, option_2, option_3, option_4]
correct_answer = option_list[0]
#Randomly shuffle the options
random.shuffle(option_list)
#Create dropdown menus
dropdown1_1 = widgets.Dropdown(options={' ':0,'One Step': 1, 'Two Steps': 2, 'Three Steps': 3, 'Four Steps': 4, 'Five Steps': 5}, value=0, description='Steps',)
dropdown1_2 = widgets.Dropdown(options={' ':0,option_list[0]: 1, option_list[1]: 2, option_list[2]: 3, option_list[3]: 4}, value=0, description='Answer',)
#Display menus as 1x2 table
container1_1 = widgets.HBox(children=[dropdown1_1, dropdown1_2])
display(container1_1), print(" ", end='\r')
#Evaluate input
def check_answer_dropdown(b):
answer1_1 = dropdown1_1.label
answer1_2 = dropdown1_2.label
if answer1_1 == steps and answer1_2 == correct_answer:
print("Correct! ", end='\r')
elif answer1_1 != ' ' and answer1_2 != ' ':
print("Try again.", end='\r')
else:
print(" ", end='\r')
dropdown1_1.observe(check_answer_dropdown, names='value')
dropdown1_2.observe(check_answer_dropdown, names='value')
###Output
_____no_output_____
###Markdown
Conclusions* The **specific heat capacity** of a material is an empirically determined value characteristic of a particular material. It is defined as the amount of heat needed to raise the temperature of 1 kg of the material by 1°C.* We use the formula $Q=mc\Delta T$ to calculate the amount of heat required to change the temperature of a material in which there is no change of phase.* The **latent heat of fusion** ($H_f$) is defined as the amount of heat needed to melt 1 kg of a solid without a change in temperature.* The **latent heat of vaporization** ($H_v$) is defined as the amount of heat needed to vaporize 1 kg of a liquid without a change in temperature.* We use the formula $Q=mH_f$ to calculate the heat required to change a material from a solid to a liquid, or from a liquid to a solid.* We use the formula $Q=mH_v$ to calculate the heat required to change a material from a liquid to a gas, or from a gas to a liquid.* If heat is being taken away, then a negative sign must be placed in front of $H_f$ and $H_v$.* We use a combination of the above formulae to compute the heat required to change a material from an initial temperature to a final temperature when one (or more) phase changes occur across a range of temperatures.Images in this notebook represent original artwork.
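As a quick worked check of these formulae, the cell below is a small sketch that takes a hypothetical 250 g sample of ice at -10°C all the way to steam at 110°C, using the same specific and latent heat values as the practice problems above.
###Code
#Small sketch: total heat for a hypothetical 250 g sample, ice at -10°C -> steam at 110°C
mass = 0.250                                  # kg (hypothetical 250 g sample)
c_ice, c_water, c_steam = 2060, 4180, 2020    # specific heats in J/(kg·°C)
H_f, H_v = 334000, 2260000                    # latent heats in J/kg
Q = (mass*c_ice*(0 - (-10))       # warm the ice from -10°C to 0°C
     + mass*H_f                   # melt the ice at 0°C
     + mass*c_water*(100 - 0)     # warm the water from 0°C to 100°C
     + mass*H_v                   # vaporize the water at 100°C
     + mass*c_steam*(110 - 100))  # warm the steam from 100°C to 110°C
print(round(Q/1000, 1), "kJ")
###Output
763.2 kJ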
###Code
%%html
<script>
function code_toggle() {
if (code_shown){
$('div.input').hide('500');
$('#toggleButton').val('Show Code')
} else {
$('div.input').show('500');
$('#toggleButton').val('Hide Code')
}
code_shown = !code_shown
}
$( document ).ready(function(){
code_shown=false;
$('div.input').hide()
});
</script>
<form action="javascript:code_toggle()"><input type="submit" id="toggleButton" value="Show Code"></form>
###Output
_____no_output_____ |
Python-fundamentals-By-Mohsin Nazakat.ipynb | ###Markdown
Table of Contents:=============================* [1. What is Python?](1) * [1.1 History:](1.1) * [1.2 Present:](1.2) * [1.3 Future:](1.3) * [1.4 Strengths of Python:](1.4) * [1.5 Weaknesses of Python:](1.5) * [1.6 Suitability of Python:](1.6) * [1.7 Structure of Python:](1.7) * [1.8 Sample code in Python:](1.8)* [2. Data Types in Python](2) * [2.1 Strings](2.1) * [2.2 Numeric Data Types](2.2) * [2.2.1 Int Datatype](int) * [2.2.2 Float Datatype](float) * [2.2.3 Complex Datatype](complex) * [2.3 Sequence Data Types](2.3) * [2.3.1 List](2.3.1) * [2.3.2 Tuple](2.3.2) * [2.4 Dictionary](2.4) * [2.5 Set](2.5) * [2.6 Boolean](2.6)* [3. Comparison Operators](3)* [4. if-else Statement](4) * [4.1 elif](4.1)* [5. Loops](5) * [5.1 For Loop](5.1) * [5.2 While Loop](5.2)* [6. Functions](6) * [6.1 Built-in Functions](6.1) * [6.2 User-Defined Functions](6.2)* [7. Lambda Functions](7) * [7.1 Map()](7.1) * [7.2 Filter()](7.2) * [7.3 Reduce()](7.3)* [8. File I/O](8) * [8.1 From Buffer](8.1) * [8.2 From Text File](8.2) * [8.3 Positioning](8.3)* [9. Introduction to Pandas](9) * [9.1 History](9.1) * [9.2 Features](9.1) * [9.3 Purpose](9.3)* [10. Series in Pandas](10) * [10.1 Definition](10.1) * [10.2 Creating series from dict](10.2) * [10.3 Creating series from ndarray](10.3) * [10.4 Creating series from scalar values](10.4) * [10.5 Accessing Series](10.5) * [10.6 Performing operations on series](10.6) * [10.7 Vectorized operations on series](10.7) * [10.8 Naming of series](10.8)* [11. DataFrame in Pandas](11) * [11.1 Definition](11.1) * [11.2 Creating DataFrame from List](11.2) * [11.3 From dict of series](11.3) * [11.4 From dict of ndarrays / lists](11.4) * [11.5 From a list of dicts](11.5) * [11.6 From a dict of tuples](11.6) * [11.7 Column selection, addition, deletion](11.7) * [11.8 Indexing / Selection](11.8) * [11.9 Data alignment and arithmetic](11.9) * [11.10 Boolean Operators](11.10) * [11.11 Transposing](11.11) * [12. Viewing Data in Pandas](12) * [12.1 Head of Data](12.1) * [12.2 Tail of Data](12.2) * [12.3 Display Index](12.3) * [12.4 Display Column](12.4) * [12.5 Printing Values](12.5) * [12.6 Sorting by axis](12.6) * [12.7 Sorting by values](12.7) * [12.8 Describing DataFrame](12.8) * [12.9 Selecting a Column](12.9) * [12.10 Slicing](12.10) 1. What is Python? ===============================**Python is a well-known programming language widely adopted by programmers, data scientists, and ML (Machine Learning) and AI (Artificial Intelligence) engineers these days. It is an interpreted, high-level language in which we can implement the concepts of object-oriented programming as well. It is easy to learn and easy to use. Python supports modularity of code, which increases reusability. Python has a large, active community, which is why thousands of open-source libraries for Python are available on the internet to help coders. As there is no separate compilation step in Python, debugging code is comparatively easy.** 1.1 History: ==============**The Python project was conceived in the late 1980s, and around 1989 Guido van Rossum started implementing it. In 1991 he published the first release, version 0.9.0. Over time many versions were released, but the revolutionary one was Python 3.0 (released on December 3, 2008). This was the moment when Python started gaining widespread fame and adoption in the world of computer science.** 1.2 Present: =================**At this time, the latest version of Python is 3.9.2. 
Currently Python is widely used in:**- Machine Learning - Deep Learning - Artificial Intelligence - Natural Language Processing - Web Development - Data Analysis and Processing - Big Data - Cloud Computing - Game Development **So we can say that among all programming languages Python has one of the largest scopes of application.** 1.3 Future: =============== **As we know, the future of computer science is strongly coupled with AI and ML, and Python has great significance for both of these domains, so we can deduce that Python is a language of the future. If we observe statistics based on Google searches, we find that Python is at the top and its popularity is increasing at a rate of about 4.1% every year.** 1.4 Strengths of Python: ================================- Simplicity: **The syntax of Python is simple, and it is quite easy for developers to switch to it.**- Supports Multiple Programming Paradigms: **It has full support for object-oriented and structured programming.**- Libraries: **A large number of robust standard libraries are available for Python.**- Open-source frameworks and tools: **We have different tools available for different needs, e.g. Django, Flask, etc.** 1.5 Weaknesses of Python: ==================================- Speed: **As Python is interpreted, it is comparatively slower than C and C++, which use compilers as translators.**- Inter-version incompatibility: **Developers might face problems if some modules are written in Python 2 and others in Python 3.**- Underdeveloped database access layer: **The database access layer of Python is not as mature as JDBC (Java Database Connectivity) and ODBC (Open Database Connectivity). It is underdeveloped and not often used at the enterprise level.**- Not native to mobile: **When it comes to mobile, Python is not a good choice for development as it is not native to the mobile environment. Neither Android nor iOS supports Python as an official programming language for mobile development.** 1.6 Suitability of Python: ==================================**Talking about the current era, the scope of suitability of Python is huge. You can use it for**- Web Development - Game Development - Machine Learning and Artificial Intelligence - Data Science and Data Visualization - Desktop GUI - Web Scraping Applications - Business Applications - Audio and Video Applications - CAD Applications - Embedded Applications 1.7 Structure of Python: =================================== **Python is an interpreted language and the structure of Python code is simpler than in many other programming languages. It uses indentation to delimit blocks, which makes code written in Python highly maintainable and readable.** 1.8 Sample code in Python: ====================================
###Code
#here's the code check even and odd in python
a=1
print("Output:")
if(a%2):
print("The number is odd")
else:
print("The number is even")
###Output
Output:
The number is odd
###Markdown
2. Data Types in Python: =====================================**Unlike many other programming languages, in Python we do not declare the datatype of a variable before using it; the type is determined when we assign a value to it, based on the value being assigned. It could be a string, int, float, complex, boolean, tuple, etc.** 2.1 Strings: =============== Definition:============ **In Python a string is a datatype which contains text enclosed by either ' ' (single quotes) or " " (double quotes).** Purpose:=========**The purpose of a string is simply to store text data. A string could consist of a single character like "a" or many characters like "apple". A string might be without spaces like "mohsin" or it might contain blank spaces as well, like "mohsin is a student".** Importance:=============**String is an important datatype because it allows us to store text in the simplest possible manner. If we didn't have strings, dealing with textual data would be quite tricky.** Application:============= **We use strings when we need to:*** Send textual data to a program * Receive textual data from a program and display it to the user. Strength:============= **In Python strings behave like sequences of characters, so you can access a particular character using [index].** Weakness:============= **In Python, if we have a single character we still have to make a string for it, like character="a", as Python does not have a dedicated datatype for single characters.** Example 1 Defining strings: ============================== Code:
###Code
name="Mohsin"
print("Output:")
print(name)
###Output
Output:
Mohsin
###Markdown
Example 2 Concatenating strings:==================================== Code:
###Code
word1="hello"
word2="world"
sentence= word1+" "+word2
print("Output:")
print(sentence)
###Output
Output:
hello world
###Markdown
Example 3 Splitting the strings:================================== Code:
###Code
sentence= "Pakistan is my homeland"
print("Output:")
print(sentence.split(" "))
#splitting the whole sentence into substrings after every blank space
###Output
Output:
['Pakistan', 'is', 'my', 'homeland']
###Markdown
Example 4 Slicing strings:=============================== Code:
###Code
sentence= "Pakistan Zindabad"
print("Output:")
print(sentence[0:3])
###Output
Output:
Pak
###Markdown
Example 5 Inserting numbers into strings:========================================== Code:
###Code
amount=500
print("Output:")
print(f"the amount in your bank is {amount}")
###Output
Output:
the amount in your bank is 500
###Markdown
Example 6 Replacing the strings:==================================== Code:
###Code
sentence= "hello world"
print("Output:")
print(sentence.replace("h","H"))
###Output
Output:
Hello world
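###Markdown
Example 7 Indexing a string (a quick sketch): ============================== Since a string behaves like a sequence of characters, an individual character can be read with [index]; negative indices count from the end. Note that strings are immutable, so assigning to an index raises an error. Code:
###Code
word = "Pakistan"
print("Output:")
print(word[0])     # first character
print(word[-1])    # last character
# word[0] = "p"    # uncommenting this would raise a TypeError: strings are immutable
###Output
Output:
P
n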
###Markdown
2.2 Numeric Data Types: ================================ Definition:============ In Python we have 3 numeric datatypes - int: deals with integers (negative and positive)- float: deals with floating-point numbers (negative and positive) - complex: deals with imaginary/complex numbers Purpose:==========**The purpose of having 3 different numeric datatypes is that we need to deal with different kinds of numeric data, and it is not possible to handle all of them well with a single type.** Importance:============ **These datatypes are important because numbers play a big role in life, and most real-life solutions involve numbers as foundational building blocks.** Application:============ **We use numeric datatypes whenever we are dealing with calculations, which makes their scope quite huge, as calculations are involved everywhere.** Strength:============ Here are a few plus points about numeric datatypes in Python:* The int type can hold (positive and negative) numbers of unlimited length. * We can work with numbers in scientific notation using the float type.* We have a separate datatype, "complex", for dealing with imaginary numbers. * These numeric datatypes are interconvertible, meaning you can change int to float and vice versa. Weakness:============ **As Python is a dynamically typed language, other readers of your code might get confused about which numeric type a variable holds.** Example 1: Class of int type ===================================
###Code
num1= 5
num2=46456460404
num3=-155464564
print("Output:")
print("datatype of num1 =",type(num1))
print("datatype of num2 =",type(num2))
print("datatype of num3 =",type(num3))
###Output
Output:
datatype of num1 = <class 'int'>
datatype of num2 = <class 'int'>
datatype of num3 = <class 'int'>
###Markdown
Example 2: Prefixing Integers=================================== We can represent integers in Python in binary, octal and hexadecimal form by applying prefixes.- 0b for binary - 0o for octal - 0x for hexadecimal
###Code
#decimal number
decimal_number=5
#binnary number
binary_number=0b111001
#octal number
octal_number= 0o57
#hexadecimal number
hex_number=0x10
print("Output:")
print("Decimal number= ", decimal_number)
print("0b111001 in decimal = ", binary_number)
print("0o57 in decimal = ", octal_number)
print("0x10 in decimal = ", hex_number)
###Output
Output:
Decimal number= 5
0b111001 in decimal = 57
0o57 in decimal = 47
0x10 in decimal = 16
###Markdown
Example 3: Class of float type ===================================
###Code
float1=5.04
float2=-455.464
float3= 5/7
float4= float('-infinity')#float can store -ve and positive infinity
float5= float('nan') #it also stores nan=not any number format
float6= float(3e-5)#it also stores exponential numbers.
print("Output:")
print("datatype of float1 =",type(float1))
print("datatype of float2 =",type(float2))
print("datatype of float3 =",type(float3))
print("datatype of float4 =",type(float4))
print("datatype of float5 =",type(float5))
print("datatype of float6 =",type(float6))
###Output
Output:
datatype of float1 = <class 'float'>
datatype of float2 = <class 'float'>
datatype of float3 = <class 'float'>
datatype of float4 = <class 'float'>
datatype of float5 = <class 'float'>
datatype of float6 = <class 'float'>
###Markdown
Example 4: Complex Numbers =================================
###Code
#in complex datatype we use j to represent imagnary part
complex_no1= 5+2j
complex_no2= 3j
complex_no3= 5-2j
print("Output:")
print("type of complex_no1=",type(complex_no1))
print("type of complex_no2=",type(complex_no2))
print("type of complex_no3=",type(complex_no3))
###Output
Output:
type of complex_no1= <class 'complex'>
type of complex_no2= <class 'complex'>
type of complex_no3= <class 'complex'>
###Markdown
Example 5: Type conversions=================================
###Code
int_num = 10 # int
float_num = 3.14 # float
complex_num = 5j # complex
#convert from int to float:
converted_float = float(int_num)
#convert from float to int:
converted_int = int(float_num)
#convert from int to complex:
converted_complex = complex(int_num)
print("Output:")
print(converted_float,type(converted_float))
print(converted_int,type(converted_int))
print(converted_complex,type(converted_complex))
###Output
Output:
10.0 <class 'float'>
3 <class 'int'>
(10+0j) <class 'complex'>
###Markdown
2.3 Sequence Data Types: =================================== Definition:============= Sequence datatypes allow us to efficiently store multiple values in an organized manner. In Python we have these sequence datatypes. - List - Tuple - Range - Strings (discussed earlier) Go to Strings 2.3.1 List: ============= Definition:============= In Python a list is a datatype in which we can store multiple items. In a list we have multiple elements separated by commas. List is one of the most widely used datatypes in Python when we are dealing with collections. Strength:============= - The elements of a list are ordered- The elements of a list can be duplicated- The elements of a list are changeable- A list stores its element references in a contiguous block, so indexing is fast Suitability:============= - When you need a mutable collection, prefer a list Example1: Declaration and Indexing======================================= Here you can see how to declare a list and how indexing works in a list.
###Code
people= ["teacher", "student", "doctor", "engineer"]
print("Output:")
print(type(people))
print(people)
print("index[0]",people[0])
print("index[1]",people[1])
print("index[2]",people[2])
print("index[3]",people[3])
###Output
Output:
<class 'list'>
['teacher', 'student', 'doctor', 'engineer']
index[0] teacher
index[1] student
index[2] doctor
index[3] engineer
###Markdown
Example2: Mutability===================== **Here you will see how we can make changes in a list**
###Code
people= ["teacher", "student", "doctor", "engineer"]
print("Output:")
print("List before changes:",people)
print("index[0]",people[0])
print("index[1]",people[1])
print("index[2]",people[2])
print("index[3]",people[3])
#here we are going to make chagnes in list
people[2]= "lawyer"
people[3]= "painter"
print("List after changes:",people)
print("index[0]",people[0])
print("index[1]",people[1])
print("index[2]",people[2])
print("index[3]",people[3])
###Output
Output:
List before changes: ['teacher', 'student', 'doctor', 'engineer']
index[0] teacher
index[1] student
index[2] doctor
index[3] engineer
List after changes: ['teacher', 'student', 'lawyer', 'painter']
index[0] teacher
index[1] student
index[2] lawyer
index[3] painter
###Markdown
Example3: Finding length of List==================================
###Code
people= ["teacher", "student", "doctor", "engineer"]
print("Output:")
print("Lenght of list of people=", len(people))
###Output
Output:
Lenght of list of people= 4
###Markdown
Example4: Concatenation in List==================================
###Code
fruits=["apple", "grapes", "oranges"]
flowers=["rose", "lilly","sunflower"]
concatinated_list= fruits+flowers
print("Output:")
print("list of after concatination is ", concatinated_list)
###Output
Output:
list of after concatination is ['apple', 'grapes', 'oranges', 'rose', 'lilly', 'sunflower']
###Markdown
Example5: Slicing in List=============================
###Code
people= ["teacher", "student", "doctor", "engineer"]
#slicing from left side
print(people[1:])
print(people[2:])
print(people[3:])
print(people[4:])
###Output
['student', 'doctor', 'engineer']
['doctor', 'engineer']
['engineer']
[]
###Markdown
Example6: Reversing a list===============================
###Code
people= ["teacher", "student", "doctor", "engineer"]
fruits=["apple", "grapes", "oranges"]
flowers=["rose", "lilly","sunflower"]
print("Output:")
print("Lists befor reversing are")
print(people)
print(fruits)
print(flowers)
people.reverse()
fruits.reverse()
flowers.reverse()
print("\nLists after reversing are:")
print(people)
print(fruits)
print(flowers)
###Output
Output:
Lists befor reversing are
['teacher', 'student', 'doctor', 'engineer']
['apple', 'grapes', 'oranges']
['rose', 'lilly', 'sunflower']
Lists after reversing are:
['engineer', 'doctor', 'student', 'teacher']
['oranges', 'grapes', 'apple']
['sunflower', 'lilly', 'rose']
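###Markdown
Example7: Adding and removing elements (a quick sketch)========================================================= The examples above never change the length of a list, so here is a small sketch of the commonly used list methods append(), insert() and remove().
###Code
people= ["teacher", "student", "doctor"]
print("Output:")
people.append("engineer")      # add an element at the end
people.insert(0, "farmer")     # add an element at index 0
people.remove("doctor")        # remove the first matching element
print(people)
###Output
Output:
['farmer', 'teacher', 'student', 'engineer']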
###Markdown
2.3.2 Tuple: ============ Definition:=========== A tuple in Python is a datatype which is used to store collections of data. Unlike a list, it is not enclosed by [ ] square brackets but by ( ) parentheses. Strength:========= - The elements of a tuple are ordered- The elements of a tuple can be duplicated- The stored elements can be heterogeneous. Weakness:========= - The stored elements cannot be changed, as a tuple is immutable. Suitability:========== - Tuples are by convention used when we need to store heterogeneous elements. Example1: Declaration of a tuple and Accessing ================================================== Let's see how we can declare a tuple
###Code
food= ("biryani", "qorma","pulao")
print("Output:")
print(food)
print(type(food))
print("index[0]=",food[0])
print("index[1]=",food[1])
print("index[2]=",food[2])
#immutable this is not allowed in tuple (you can uncomment and check)
#food[0]="nihari"
###Output
Output:
('biryani', 'qorma', 'pulao')
<class 'tuple'>
index[0]= biryani
index[1]= qorma
index[2]= pulao
###Markdown
Example2: Finding length of a tuple=======================================
###Code
food= ("biryani", "qorma","pulao")
print("Length=",len(food))
###Output
Length= 3
###Markdown
Example3: Converting a list to tuple=======================================
###Code
list1= ["a","b","c"]
print("Output:")
print("type of list1",type(list1))
tuple1= tuple(list1)
print("type of tuple 1:", type(tuple1))
###Output
Output:
type of list1 <class 'list'>
type of tuple 1: <class 'tuple'>
###Markdown
Example4: Concatenating tuples=======================================
###Code
food= ("biryani", "qorma","pulao")
fast_food=("burger","shawarma","pizza")
concatinating_tuple=food+fast_food
print("Output:")
print(concatinating_tuple)
###Output
Output:
('biryani', 'qorma', 'pulao', 'burger', 'shawarma', 'pizza')
###Markdown
Example5: Slicing in the tuple=======================================
###Code
food= ("biryani", "qorma","pulao")
print("Output:")
print(food[1:])
print(food[2:])
print(food[3:])
###Output
Output:
('qorma', 'pulao')
('pulao',)
()
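###Markdown
Example6: Unpacking a tuple (a quick sketch)============================================= Tuples are often used to hold several related values at once, and their elements can be unpacked into separate variables in a single assignment. A small sketch:
###Code
food= ("biryani", "qorma","pulao")
first, second, third = food    # unpack the three elements into three variables
print("Output:")
print(first)
print(second)
print(third)
###Output
Output:
biryani
qorma
pulao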
###Markdown
2.4 Dictionary: =================== Definition:============= In Python a dictionary is an ordered collection of key-value pairs (as of Python 3.7 dictionaries preserve insertion order). It is similar to a hash table where each key holds a specific value:- Key: It could be any hashable datatype in Python.- Value: It is an arbitrary Python object. Strength: ============= - Using Python dictionaries we can store data in a more descriptive form.- We can use a dictionary as a hash table in Python. Weaknesses:============= - Dictionaries used to be unordered (but as of Python 3.7 this weakness is removed)- Compared to other data structures, a dictionary consumes more storage. Suitability:============= - Dictionaries are used when we need to map keys to some data- They are used in graphs for making adjacency lists. Example1: Initializing a dictionary==================================== Here you can see how to declare a dictionary and extract values using keys.
###Code
students={1:"mohsin",
2: "gama",
3:"ghazi",
4:"nomi"}
print("Type:",type(students))
print(students)
#here the keys start at 1
print(students[1])
print(students[2])
print(students[3])
print(students[4])
###Output
Type: <class 'dict'>
{1: 'mohsin', 2: 'gama', 3: 'ghazi', 4: 'nomi'}
mohsin
gama
ghazi
nomi
###Markdown
Example2: Deleting a key-value pair in dictionary=====================================================
###Code
students={1:"mohsin",
2: "gama",
3:"ghazi",
4:"nomi"}
print("Before Deletion:\n",students)
del(students[3])
print("After Deletion:\n",students)
###Output
Before Deletion:
{1: 'mohsin', 2: 'gama', 3: 'ghazi', 4: 'nomi'}
After Deletion:
{1: 'mohsin', 2: 'gama', 4: 'nomi'}
###Markdown
Example3: Popping an item from a Dictionary ============================================
###Code
students={1:"mohsin",
2: "gama",
3:"ghazi",
4:"nomi"}
print("Output:")
print("Before Deletion:\n",students)
result= students.pop(4)
#here the value for key 4 is popped out and stored in the variable result
print("After Deletion:\n",students)
print("\nresult=",result)
###Output
Output:
Before Deletion:
{1: 'mohsin', 2: 'gama', 3: 'ghazi', 4: 'nomi'}
After Deletion:
{1: 'mohsin', 2: 'gama', 3: 'ghazi'}
result= nomi
###Markdown
Example4: Clearing all items in the dictionary using clear()================================================================
###Code
students={1:"mohsin",
2: "gama",
3:"ghazi",
4:"nomi"}
print("Output:")
print("Before Clearing:\n",students)
students.clear()
#we'll have a empty dictionary now
print("After Clearing:\n",students)
###Output
Output:
Before Clearing:
{1: 'mohsin', 2: 'gama', 3: 'ghazi', 4: 'nomi'}
After Clearing:
{}
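###Markdown
Example5: Adding entries and iterating (a quick sketch)========================================================= A dictionary can also grow after it is created, and we can loop over its key-value pairs with items(). A small sketch:
###Code
students={1:"mohsin",
2: "gama"}
students[3] = "ghazi"      # add a new key-value pair
students[1] = "Mohsin"     # update an existing value
print("Output:")
for key, value in students.items():
    print(key, "->", value)
###Output
Output:
1 -> Mohsin
2 -> gama
3 -> ghazi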
###Markdown
2.5 Set: ============= Definition:============= In Python we use sets to represent collections of unordered and unindexed data. Every element in a set is unique, which means it cannot contain duplicate items. The set itself is mutable but the stored elements must be immutable. Values in a set are separated by commas. Strength: ============= - Because sets cannot have multiple occurrences of the same element, they are highly useful for efficiently removing duplicate values from a list or tuple and for performing common math operations like unions and intersections Weakness: ============= - Not being ordered or indexed makes accessing individual elements less convenient. Suitability:=============- Sets are suitable when we are dealing with membership testing and eliminating duplicate entries. Example1: Initializing a set==============================
###Code
stationary_set= {"pen","pencil","scale"}
print("Output:")
print(stationary_set)
print(type(stationary_set))
###Output
Output:
{'scale', 'pencil', 'pen'}
<class 'set'>
###Markdown
Example2: Adding value in the set=====================================
###Code
stationary_set= {"pen","pencil","scale"}
print("Output:")
print("Before adding value")
print(stationary_set)
stationary_set.add("eraser")
print("After adding value")
print(stationary_set)
###Output
Output:
Before adding value
{'scale', 'pencil', 'pen'}
After adding value
{'scale', 'pencil', 'eraser', 'pen'}
###Markdown
Example3: Removing values from the set=============================================
###Code
stationary_set= {"pen","pencil","scale"}
print("Output:")
print("Before removing value")
print(stationary_set)
stationary_set.discard("pen")
print("After after value")
print(stationary_set)
###Output
Output:
Before removing value
{'scale', 'pencil', 'pen'}
After after value
{'scale', 'pencil'}
###Markdown
Example4: Union of sets==========================
###Code
stationary_set_mohsin = {"pen","pencil","scale"}
stationary_set_tiyab = {"eraser","pencil","marker"}
union_set= stationary_set_mohsin|stationary_set_tiyab
print("Output:")
print(union_set)
###Output
Output:
{'scale', 'pencil', 'marker', 'eraser', 'pen'}
###Markdown
Example5: Intersection of sets================================
###Code
stationary_set_mohsin = {"pen","pencil","scale"}
stationary_set_tiyab = {"eraser","pencil","marker"}
intersection_set= stationary_set_mohsin & stationary_set_tiyab
print("Output:")
print(intersection_set)
###Output
Output:
{'pencil'}
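###Markdown
Example6: Membership testing and removing duplicates (a quick sketch)====================================================================== Since the suitability notes above mention membership testing and eliminating duplicates, here is a small sketch of the in operator and of deduplicating a list by converting it to a set (sorted only to make the printed order predictable).
###Code
stationary_set= {"pen","pencil","scale"}
print("Output:")
print("pen" in stationary_set)       # membership test
print("marker" in stationary_set)
items = ["pen", "pen", "scale", "pen"]
print(sorted(set(items)))            # duplicates removed
###Output
Output:
True
False
['pen', 'scale']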
###Markdown
2.6 Boolean: ================ Definition:============= In Python, boolean is a datatype which has 2 possible values:- True (it will not accept "true" as a bool)- False (it will not accept "false" as a bool). This datatype belongs to the bool class of Python. bool is not a keyword in Python, but it is advised not to use it for naming variables. Strength: =========== - It is quite useful for storing the results of logical expressions. Suitability:============== - It is useful when we are dealing with logical expressions and decision making. Example1: Initializing a boolean variable=========================================
###Code
boolean_variable= True
print(type(boolean_variable))
###Output
<class 'bool'>
###Markdown
Example2: Storing result of logical expressions:=====================================================
###Code
number1= 10
number2= 20
result= number1>number2
print("Output:")
# 10>20 so it should return False
print("Result=", result)
result= number2>number1
# 20>10 so it should return True
print("Result=", result)
###Output
Output:
Result= False
Result= True
###Markdown
Example3: Boolean Operators ================================
###Code
#and operator
boolean_variable1= True
boolean_variable2= False
result= boolean_variable1 and boolean_variable2
print("Output:")
print("Result= ", result)
#or operator
boolean_variable1= True
boolean_variable2= False
result= boolean_variable1 or boolean_variable2
print("Result= ", result)
#not operator
boolean_variable1= True
result= not boolean_variable1
print("Result= ", result)
###Output
Output:
Result= False
Result= True
Result= False
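###Markdown
Example4: Converting values to bool (a quick sketch)===================================================== Other values can be converted to a boolean with bool(): empty and zero values evaluate to False, everything else to True. A small sketch:
###Code
print("Output:")
print(bool(0))         # zero is falsy
print(bool(42))        # non-zero numbers are truthy
print(bool(""))        # an empty string is falsy
print(bool("Mohsin"))  # a non-empty string is truthy
###Output
Output:
False
True
False
True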
###Markdown
3 Comparison Operators: ================================== Definition:============= As the name suggests, comparison operators are used to compare two operands. In Python we have the following comparison operators- Equal (==)- Not Equal (!=)- Greater than (>)- Less than (<)- Greater than or equal (>=)- Less than or equal (<=)When we apply these operators we get a boolean value. Example1: Performing Comparison Operators:==================================================
###Code
fruit1="Apples"
fruit2="Grapes"
## equal to operator
result=fruit1==fruit2
print("Output:")
print("Result=",result)
## not equal to operator
result=fruit1!=fruit2
print("Result=",result)
# greater than
number1= 10
number2= 20
result= number1>number2
print("Result=",result)
# Less than
number1= 10
number2= 20
result= number1<number2
print("Result=",result)
#Greater than or equal
number1= 10
number2= 10
result= number1>=number2
print("Result=",result)
# Less than or equal
number1= 10
number2= 10
result= number1<=number2
print("Result=",result)
###Output
Output:
Result= False
Result= True
Result= False
Result= True
Result= True
Result= True
###Markdown
4 if-else : ============ Like in other programming languages, if-else statements are used for conditional transfer of control in the program. An if statement evaluates a logical expression and decides whether to transfer control to the if-block or to the rest of the program, whereas with else the control is transferred either to the if block or to the else block. Strength: =========== - Helps in redirecting control during the execution of the program. Weakness:============== - Interrupts the normal sequential flow of execution of the program. Suitability:============== - if-else is suitable when we are dealing with multi-outcome problems. Example1: Classifying Odd and Even numbers:==================================================
###Code
num=10
print("Output:")
if(num%2):
print("number is odd");
else:
print("number is even")
###Output
Output:
number is even
###Markdown
Example2: Finding Largest Number:======================================
###Code
num1= 10
num2= 20
print("Output:")
if(num1>num2):
print(f"{num1} is greater than {num2}")
else:
print(f"{num2} is greater than {num1}")
###Output
Output:
20 is greater than 10
###Markdown
4.1 elif : ======= elif is used when we want control to be transferred to more than 2 blocks. We can use multiple elif blocks, and if the condition of a certain block is fulfilled, that block is executed. If no condition is fulfilled, control is transferred to the else block. Example3: Finding Grade of Student :================================================
###Code
marks=50
print("Output:")
if(marks>=90):
print("your grade is A")
elif(marks>=70 and marks<90):
print("your grade is B")
elif(marks>=50 and marks<70):
print("your grade is C")
else:
print("your grade is F")
###Output
Output:
your grade is C
###Markdown
Example4: Ternary Operators (Short hand if-else) :=====================================================
###Code
num1 = 20
num2 = 30
print("Output:")
print(f"{num1} is greater") if num1 > num2 else print(f"{num2} is greater")
###Output
Output:
30 is greater
###Markdown
5 Loops: ============ Loops play an important role in every programming language. We use a loop when we need to perform repetitive tasks. In Python we have two types of loops. - For Loop: - While Loop: Strength: =========== - Our program does not have repetitive lines of code when we use loops. Weakness:============== - Although loops make our work easier, they might increase the complexity of the program. Suitability:============== - When we are dealing with a problem in which we need to perform repetitive tasks. 5.1 For Loop: ================= Another name for this loop is counter loop. We use a for loop to iterate through a sequence. It is also used when we need to iterate a code block a fixed number of times. Example1: Looping through a list:======================================
###Code
people= ["teacher", "student", "doctor", "engineer"]
print("Output:")
for i in people:
print(i)
#on every iteration i pick a value from the list of people.
###Output
Output:
teacher
student
doctor
engineer
###Markdown
Example2: Looping through a String:==============================
###Code
name="Mohsin"
print("Output:")
for i in name:
print(i)
###Output
Output:
M
o
h
s
i
n
###Markdown
Example3: range() function:======================== The range() function returns a sequence of numbers, starting from 0 by default, incrementing by 1 by default, and ending before a specified number. We can override the default start, end and increment values (a short extra sketch showing range(start, stop, step) follows the example below).
###Code
print("Output:")
for num in range (6):
print("Pakistan")
###Output
Output:
Pakistan
Pakistan
Pakistan
Pakistan
Pakistan
Pakistan
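###Markdown
 An additional sketch (values chosen only for illustration) of the optional start, stop and step arguments of range() mentioned above.
###Code
# extra sketch: range(start, stop, step)
print("Output:")
for num in range(2, 11, 2): # start at 2, stop before 11, step by 2
    print(num) # prints 2, 4, 6, 8, 10
###Output
_____no_output_____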
###Markdown
Example4: Nested for loop:============================== We can use another for loop inside a for loop to perform multiple iterations within an iteration. Here is an example of a nested for loop.
###Code
#printing triangle of astericks
print("Output:")
for i in range(0, 5):
for j in range(0, i+1):
print("* ",end="")
print("\r")
###Output
Output:
*
* *
* * *
* * * *
* * * * *
###Markdown
5.2 While Loop: ===================== Another name for this loop is conditional loop. This loop keeps iterating as long as a certain condition is true. Example5 : Conditional Loop:=================================
###Code
#a video game goes on as long as you have lives greater than 0; this works like a while loop:
life=10
while(life>0):
print(f"You have {life} lives remaining")
life=life-1
else:
print("Game over")
###Output
You have 10 lives remaining
You have 9 lives remaining
You have 8 lives remaining
You have 7 lives remaining
You have 6 lives remaining
You have 5 lives remaining
You have 4 lives remaining
You have 3 lives remaining
You have 2 lives remaining
You have 1 lives remaining
Game over
###Markdown
6 Function : ============ A function is defined as a block of code which only runs when we call it. We can pass data, known as parameters, into a function. A function can also return data as a result. In Python we have two types of functions. - Built-in Functions: - User-defined Functions: Strength: =========== - Increases the reusability of code.- Gives modularity to our program.- Our code becomes organized if we work in modules. Weakness:============== - A function call and its return statement take a couple of extra machine instructions, which might increase the running time a little bit. Suitability:==============- We use functions when we need to use the same block of code in multiple places. 6.1 Built-in Functions : As the name suggests, these functions are available in Python by default; we do not need to import them or write them manually in the program. Example1 : Built-in Functions (Numeric):============================================
###Code
number=10
print("Output:")
#bin() is a built in function returns the binary value of the number
print("bin(10)= ",bin(10))
#abs() is a built in function return absolute value of a number
print("abs(10)= ",abs(10))
###Output
Output:
bin(10)= 0b1010
abs(10)= 10
###Markdown
Example2 : Built-in Functions (Objects):============================================
###Code
name= "Mohsin"
#type function tells the type of object
print("type(name)= ",type(name))
#len is a built in function returns the length of object
print("len(name)= ",len(name))
###Output
type(name)= <class 'str'>
len(name)= 6
###Markdown
6.2 User-defined Functions : These functions are defined by the user and later called at different places in the program. We can define the following types of user-defined functions. - Parameterized Functions: - Non-parameterized Functions: - Void Functions: - Value-returning Functions: Example3 : Parameterized Functions======================================= In these functions we pass some values which can be used in the block of the function.
###Code
#defining a function which is taking a string a parameter
def print_string (string):
print(string)
string= "Mohsin Nazakat"
#calling a function and passing it a parameter "Mohsin Nazakat"
print("Output:")
print_string(string)
###Output
Output:
Mohsin Nazakat
###Markdown
Example4 : Non-parameterized Functions=========================================== In these functions we do not pass any argument or parameter.
###Code
#defining a function which is taking no parameter
def print_string ():
print("Mohsin Nazakat")
#calling a function with no parameter/argument
print("Output:")
print_string()
###Output
Output:
Mohsin Nazakat
###Markdown
Example5 : Void Function=========================== These functions do not return anything.
###Code
# A function retrun no value
def print_string ():
for i in range(10):
print(f"{i+1} Mohsin Nazakat")
print("Output:")
print_string()
###Output
Output:
1 Mohsin Nazakat
2 Mohsin Nazakat
3 Mohsin Nazakat
4 Mohsin Nazakat
5 Mohsin Nazakat
6 Mohsin Nazakat
7 Mohsin Nazakat
8 Mohsin Nazakat
9 Mohsin Nazakat
10 Mohsin Nazakat
###Markdown
Example6 : Value-Returning Function=========================== These functions return some value.
###Code
#this function returns the sum of 2 numbers
def add(a,b):
return a+b
print("Output:")
print("Sum=",add(5,2))
###Output
Output:
Sum= 7
###Markdown
7 Lambda Function : ============================ These are functions that can be created at runtime using the construct called "lambda". A lambda consists of a single expression which is evaluated when the function is called. These functions do not have a name (they are anonymous) and implicitly return the value of their expression. There are a few functions which take a lambda (anonymous) function as an argument; some of them are: - map() - filter() - reduce() Strength: =========== - Lambda functions enable us to pass a function as an argument.- These functions let us add abstraction to a block of code. Weakness:============== - A lambda is limited to a single expression, so it cannot contain statements or multi-line logic. Suitability:==============- When we need to pass a function as an argument to some other function, we use lambda functions. Example1 : Defining a Lambda Function:============================================
###Code
#defining a lambda function:
half_number= lambda num: num/2
print("Output:")
half_number(10)
###Output
Output:
###Markdown
Example2 : Using a lambda function for getting Multipliers:=============================================================
###Code
#the function multiplies its argument by an arbitrary number n
def multiplier(n):
return lambda a : a * n
#here we set the multiplier to 5, so we now have a lambda function which generates multiples of 5
my_multiplier = multiplier(5)
print("Output:")
for i in range (10):
print(f"{5} * {i+1} = ",my_multiplier(i+1))
###Output
Output:
5 * 1 = 5
5 * 2 = 10
5 * 3 = 15
5 * 4 = 20
5 * 5 = 25
5 * 6 = 30
5 * 7 = 35
5 * 8 = 40
5 * 9 = 45
5 * 10 = 50
###Markdown
7.1 map () : ============== The map() function is used with two arguments, like: r = map(func, seq). It applies the function func to all the elements of the sequence seq. After applying func on seq it returns an updated iterable (wrapped in list() below to get a list).- func: the function which is to be applied.- seq: the sequence on which the function is to be applied. Example3 : Multiplying a list by a number n using map():===========================================================
###Code
my_list= [10,20,30,40,50]
updated_list= list(map(lambda num: num*2,my_list))
print("Output:")
print("Orignal_list=" , my_list)
print("Updated_list=" , updated_list)
###Output
Output:
Orignal_list= [10, 20, 30, 40, 50]
Updated_list= [20, 40, 60, 80, 100]
###Markdown
Example4 : Concatenating a word using map():====================================================
###Code
my_list= ["Mohsin", "Ghulam Rasool", "Noman", "AbuBakar"]
updated_list= list(map(lambda strr: strr+" Student",my_list))
print("Output:")
print("Orignal_list=" , my_list)
print("Updated_list=" , updated_list)
###Output
Output:
Orignal_list= ['Mohsin', 'Ghulam Rasool', 'Noman', 'AbuBakar']
Updated_list= ['Mohsin Student', 'Ghulam Rasool Student', 'Noman Student', 'AbuBakar Student']
###Markdown
7.2 filter () : ============== As the name suggests, the filter function helps us filter elements from a list. It takes two arguments. - func: the condition on which the elements are filtered.- seq: a given sequence. Example5 : Filtering odd numbers using filter():====================================================
###Code
my_list= [1,2,3,4,5,6,7,8,9,10]
odd_list= list(filter(lambda num: num%2 ,my_list))
print("Output:")
print("Orignal_list=" , my_list)
print("List of odd numbers=" , odd_list)
###Output
Output:
Orignal_list= [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
List of odd numbers= [1, 3, 5, 7, 9]
###Markdown
Example6 : Filtering marks less than 50:==================================================
###Code
my_list= [90,80,30,15,50,45,100,70,35]
updated_list= list(filter(lambda num: num<50 ,my_list))
print("Output:")
print("Orignal_list=" , my_list)
print("Updated_list=" , updated_list)
###Output
Output:
Orignal_list= [90, 80, 30, 15, 50, 45, 100, 70, 35]
Updated_list= [30, 15, 45, 35]
###Markdown
7.3 reduce () : ==============The reduce() function in Python takes a function and a list as arguments. The function is called with a lambda function and an iterable, and a single reduced result is returned. It performs a repetitive operation over pairs of elements of the iterable. The reduce() function belongs to the functools module. It takes two arguments. - func: the function which is applied pairwise.- seq: a given sequence. Example7 : Finding the sum of a series:========================================
###Code
from functools import reduce
#importing the reduce function from the functools library
my_list= [90,80,30,15,50,45,100,70,35]
sum_series = reduce((lambda x, y: x + y), my_list)
print ("Sum of series= ", sum_series)
###Output
Sum of series= 515
###Markdown
Example8 : Largest number in the series:============================================
###Code
import functools
lis = [90,80,30,15,50,45,100,70,35]
# using reduce to compute maximum element from list
print ("The maximum element of the list is : ",end="")
print (functools.reduce(lambda a,b : a if a > b else b,lis))
###Output
The maximum element of the list is : 100
###Markdown
8 File I/O : =================We usually take input from either: - Buffer/Keyboard: - File 8.1 From Buffer/ Keyboard: ==============================We simply use the input() function, which returns a single line from the input buffer in the form of a string. By default it returns a string, but we can typecast it to other datatypes as well. Example1 : Taking Input using keyboard / buffer:=====================================================
###Code
name = input("Enter your name:")
print("Type: ",type(name))
###Output
Enter your name:Mohsin Nazakat
Type: <class 'str'>
###Markdown
Example2 : Taking integer input:===================================
###Code
#taking input and casting it to integer
num1= int (input("Enter first number "))
num2= int (input("Enter second number "))
result =num1+num2
print("Output:")
print(result)
print("type:",type(result))
###Output
Enter first number 10
Enter second number 20
Output:
30
type: <class 'int'>
###Markdown
8.2 From File: ============== In Python we can read/write a file in the following modes: - r opens a file in read-only mode.- r+ opens a file in read and write mode.- w opens a file in write-only mode (truncating existing content).- a opens a file in append mode.- a+ opens a file in append and read mode. (A short extra sketch contrasting 'w' and 'a' follows Example4 below.) Example3 : Reading input from file:=====================================
###Code
#creating a new file
f = open("mohsin.txt", "w")
#Inserting initial data in file
f = open("mohsin.txt", "r+")
f.write("Name: Mohsin Nazakat\nReg no: FA18-BCS-052")
f.close()
#opening file in r+ mode
my_file = open("mohsin.txt", "r+")
text=my_file.read()
#reading text from the file
print("Reading data from file----")
print("Output:")
print(text)
#closing the file
my_file.close()
###Output
Reading data from file----
Output:
Name: Mohsin Nazakat
Reg no: FA18-BCS-052
###Markdown
Example4 : Writing data to file:================================
###Code
#opening file in a+ mode
my_file = open("mohsin.txt", "a+")
text=my_file.write("\nDept: Computer Science ")
#closing the file
my_file.close()
my_file = open("mohsin.txt", "r+")
text=my_file.read()
#reading text from the file
print("Reading data from file----")
print("Output:")
print(text)
#closing the file
my_file.close()
###Output
Reading data from file----
Output:
Name: Mohsin Nazakat
Reg no: FA18-BCS-052
Dept: Computer Science
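###Markdown
 A small additional sketch contrasting the 'w' and 'a' modes listed above; the file name demo_modes.txt is only an illustrative placeholder.
###Code
# extra sketch: 'w' truncates an existing file, while 'a' appends to it
f = open("demo_modes.txt", "w") # 'w' creates the file or overwrites any existing content
f.write("first line\n")
f.close()
f = open("demo_modes.txt", "a") # 'a' keeps the existing content and writes at the end
f.write("second line\n")
f.close()
print(open("demo_modes.txt").read()) # shows both lines
###Output
_____no_output_____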
###Markdown
8.3 Positioning : =================== While reading or writing a file we deal with File handle which is like a cursor . It defines from where the data has to be read or written in the file. We have following functions in python which deal with position of file handle: - tell() : tells the current position of file handle.- seek() : change the current position of file handle. Example5 : Checking Current Position:=========================================
###Code
my_file= open("mohsin.txt","r+")
data= my_file.read(10)
print(data)
#checking postion of file handle
position= my_file.tell()
print("Position: ",position)
###Output
Name: Mohs
Position: 10
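###Markdown
 As an extra sketch, seek() can also jump to an arbitrary byte offset; offset 6 is used here only because the file written above starts with "Name: ".
###Code
# extra sketch: jump to byte offset 6 and read from there
my_file.seek(6) # move the file handle to offset 6
print("Position:", my_file.tell()) # should report 6
print(my_file.read(6)) # reads the next 6 characters, i.e. "Mohsin"
###Output
_____no_output_____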
###Markdown
Example6 : Changing position of file handle:===============================================
###Code
#Now setting the file handle to the beginning again
position= my_file.seek(0,0)
print(my_file.read(10))
my_file.close()
###Output
Name: Mohs
###Markdown
9 Introduction to Pandas : ==================================In computer programming, pandas is a software library written for the Python programming language for data manipulation and analysis. In particular, it offers data structures and operations for manipulating numerical tables and time series. It is free software released under the three-clause BSD license. 9.1 History : ============== In 2008, pandas development began at AQR Capital Management. By the end of 2009 it had been open sourced. Since 2015, pandas has been a NumFOCUS sponsored project. Timeline:- 2008 : Development of pandas started- 2009 : pandas becomes open source- 2012 : First edition of Python for Data Analysis is published- 2015 : pandas becomes a NumFOCUS sponsored project- 2018 : First in-person core developer sprint 9.2 Features : ============== - A fast and efficient DataFrame object for data manipulation with integrated indexing;- Tools for reading and writing data between in-memory data structures and different formats: CSV and text files, Microsoft Excel, SQL databases, and the fast HDF5 format;- Intelligent data alignment and integrated handling of missing data: gain automatic label-based alignment in computations and easily manipulate messy data into an orderly form;- Flexible data alignment and pivoting of data sets;- Intelligent label-based slicing, fancy indexing, and subsetting of large data sets;- Columns can be inserted and deleted from data structures for size mutability;- Aggregating or transforming data with a powerful group by engine allowing split-apply-combine operations on data sets;- High performance merging and joining of data sets;- Hierarchical axis indexing provides an intuitive way of working with high-dimensional data in a lower-dimensional data structure;- Time series functionality: date range generation and frequency conversion, moving window statistics, date shifting and lagging. Even create domain-specific time offsets and join time series without losing data;- Highly optimized for performance, with critical code paths written in Cython or C.- Python with pandas is in use in a wide variety of academic and commercial domains, including Finance, Neuroscience, Economics, Statistics, Advertising, Web Analytics, and more. 9.3 Purpose : ============== The purpose of pandas is to make data analytics and manipulation:- Accessible to everyone- Free for users to use and modify- Flexible- Powerful- Easy to use- Fast 10 Series in Pandas : =============================== In pandas, a Series is a one-dimensional labeled array which can hold data of any type (integer, string, float, Python objects, etc.). The axis labels are collectively called the index. 10.1 Definition:================== class pandas.Series (data=None, index=None, dtype=None, name=None, copy=False, fastpath=False) where - data : Contains data stored in the Series. If data is a dict, argument order is maintained.- index : Values must be hashable and have the same length as data. Non-unique index values are allowed. Will default to RangeIndex (0, 1, 2, …, n) if not provided.- dtype : Data type for the output Series. If not specified, this will be inferred from data. - name : The name to give to the Series.- copy : Copy input data. We can create a Series from - Array - dict - Scalar Value (A brief sketch of the dtype and name parameters follows the dict example below.) 10.2 Creating series from dict: ==================================
###Code
import pandas as pd
my_dict = {'a': 1, 'b': 2, 'c': 3}
my_series = pd.Series(data=my_dict, index=['a', 'b', 'c'])
print(my_series)
###Output
a 1
b 2
c 3
dtype: int64
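###Markdown
 A brief additional sketch of the dtype and name parameters described in the definition above (the values here are arbitrary).
###Code
# extra sketch: forcing a dtype and attaching a name at construction time
my_named_series = pd.Series(data=[1, 2, 3], index=['a', 'b', 'c'], dtype='float64', name='demo series')
print(my_named_series) # the values are stored as floats because of dtype='float64'
print(my_named_series.name) # demo series
###Output
_____no_output_____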
###Markdown
10.3 Creating series from ndarray: ==================================== - In the case of an ndarray, the index must be the same length as the data.- If no index is passed, by default it will be set to [0, 1, ..., size-1]. When index is specified:=====================
###Code
import numpy as np
#creating series with indexes.
my_series = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])
print (my_series)
#here are the index we assigned to series
my_series.index
###Output
_____no_output_____
###Markdown
When index is not specified:========================
###Code
pd.Series(np.random.randn(5))
#here you can see that, the index is starting from 0,1... size-1
###Output
_____no_output_____
###Markdown
10.4 Creating series from scaler values: ==========================================
###Code
pd.Series(1, index=['a','b','c','d','e'])
###Output
_____no_output_____
###Markdown
10.5 Accessing Series : ========================================== By Index : ==========
###Code
print("Output:")
print("Values in series:")
for i in my_series:
print(i)
#or we can print it index-by-index
print("\nIndex-by-Index Values")
for i in range (my_series.size):
print(f"index[{i}]", my_series[i])
###Output
Output:
Values in series:
0.8526565292218028
-0.4459370591736289
-1.992237280562101
-1.883839682458662
-0.31708353340386874
Index-by-Index Values
index[0] 0.8526565292218028
index[1] -0.4459370591736289
index[2] -1.992237280562101
index[3] -1.883839682458662
index[4] -0.31708353340386874
###Markdown
By Multiple Index : ===============
###Code
#we can print selected indexes as well
print(my_series[[1,4,0]])
###Output
b -0.445937
e -0.317084
a 0.852657
dtype: float64
###Markdown
By Labeled Index : ================
###Code
print(my_series['a'])
print(my_series['b'])
print(my_series['c'])
print(my_series['d'])
print(my_series['e'])
print(my_series[['a','b','c']])
###Output
0.8526565292218028
-0.4459370591736289
-1.992237280562101
-1.883839682458662
-0.31708353340386874
a 0.852657
b -0.445937
c -1.992237
dtype: float64
###Markdown
By get method : ================
###Code
print(my_series.get('a'))
print(my_series.get('b'))
print(my_series.get('c'))
print(my_series.get('d'))
print(my_series.get('e'))
###Output
0.8526565292218028
-0.4459370591736289
-1.992237280562101
-1.883839682458662
-0.31708353340386874
###Markdown
10.6 Performing operations on series : ========================================== Finding Median : ================
###Code
print(my_series.median()) # note the parentheses: median() computes the value instead of printing the bound method
###Output
-0.4459370591736289
###Markdown
Finding exponent : ================
###Code
np.exp(my_series)
###Output
_____no_output_____
###Markdown
Popping elements from a series : =========================
###Code
my_series1 =pd.Series(1, index=['a','b','c','d','e'])
print("Output:")
print("series before poping")
print(my_series1)
poped_value= my_series1.pop("b")
print("series after poping")
print(my_series1)
print("poped_value=", poped_value)
###Output
Output:
series before poping
a 1
b 1
c 1
d 1
e 1
dtype: int64
series after poping
a 1
c 1
d 1
e 1
dtype: int64
poped_value= 1
###Markdown
Shape and size of a series : ===========================
###Code
print("Shape of series: ",my_series.shape)
print("Size of series: ", my_series.size)
###Output
Shape of series: (5,)
Size of series: 5
###Markdown
10.7 Vectorized operations on series : ======================================== Addition of series: ================
###Code
my_series3 =pd.Series(1, index=['a','b','c','d','e'])
my_series4 =pd.Series(1, index=['a','b','c','d','e'])
print("series3\n",my_series3)
print("series4\n",my_series4)
print("sum of series \n",my_series3+my_series4)
# following will add the data of respective values of indexes. For example, in given output, it is
# calculated as:
# a = s['a'] + s['a']
# b = s['b'] + s['b']
# c = s['c'] + s['c']
# d = s['d'] + s['d']
# e = s['e'] + s['e']
###Output
series3
a 1
b 1
c 1
d 1
e 1
dtype: int64
series4
a 1
b 1
c 1
d 1
e 1
dtype: int64
sum of series
a 2
b 2
c 2
d 2
e 2
dtype: int64
###Markdown
Scaler multiplication of series: =========================
###Code
print ("Output:")
print (my_series3*2)
#following will multiply the data of each values of indexes, with 2. For example, in given output, it is
#calculated as:
#a = s['a'] *2
#b = s['b'] *2
#c = s['c'] *2
#d = s['d'] *2
#e = s['e'] *2
###Output
Output:
a 2
b 2
c 2
d 2
e 2
dtype: int64
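###Markdown
 An extra sketch (not in the original notebook) of pandas' automatic index alignment: when two Series with different index labels are added, values are matched by label and labels present in only one Series produce NaN.
###Code
# extra sketch: alignment by index label when the labels do not fully overlap
s_left = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
s_right = pd.Series([10, 20, 30], index=['b', 'c', 'd'])
print(s_left + s_right)
# 'a' and 'd' exist in only one of the two Series, so their sums are NaN;
# 'b' and 'c' are matched by label: 2+10 and 3+20
###Output
_____no_output_____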
###Markdown
10.8 Naming of series : ==========================
###Code
my_series3 =pd.Series(1, index=['a','b','c','d','e'], name="random array")
print("Output:")
print("Name of series: ", my_series3.name)
###Output
Output:
Name of series: random array
###Markdown
Renaming series: ===================
###Code
my_series4 =my_series3.rename("super random array")
print("Output:")
print("Name of series:", my_series4.name)
###Output
Output:
Name of series: super random array
###Markdown
11 DataFrame in Pandas : =============================== In pandas, a DataFrame is a two-dimensional, size-mutable, potentially heterogeneous tabular data structure with labeled axes (rows and columns). Three major components of a DataFrame are: - Rows - Columns - Data 11.1 Definition: ================== class pandas.DataFrame (data=None, index=None, columns=None, dtype=None, copy=False) where - data : Dict can contain Series, arrays, constants, dataclass or list-like objects. If data is a dict, column order follows insertion-order. Changed in version 0.25.0: If data is a list of dicts, column order follows insertion-order.- index : Index to use for the resulting frame. Will default to RangeIndex if no indexing information is part of the input data and no index is provided.- columns : Column labels to use for the resulting frame. Will default to RangeIndex (0, 1, 2, …, n) if no column labels are provided.- dtype : Data type to force. Only a single dtype is allowed. If None, infer.- copy : Copy data from inputs. Only affects DataFrame / 2d ndarray input. DataFrame accepts the following kinds of inputs:- Dict of 1D ndarrays, lists, dicts or Series - 2D numpy.ndarray - Structured or record ndarray - A Series - Another DataFrame 11.2 Creating DataFrame from List: ====================================== With no index and columns: ===============================
###Code
my_list=['biryani', 'pulao', 'karahi','saji']
my_dataframe= pd.DataFrame(my_list)
print("Output:")
my_dataframe
#default index and column are set
###Output
Output:
###Markdown
With provided index and columns: ===============================
###Code
my_list=['biryani', 'pulao', 'karahi','saji']
my_dataframe= pd.DataFrame(my_list, index=['a','b','c','d'], columns=['Dish'])
print("Output:")
my_dataframe
#Given index and column are set
###Output
Output:
###Markdown
11.3 From dict of series: ============================- The result index will be the union of the indexes of the various Series.- If there are any nested dicts, these will be first converted to Series.- If no columns are passed, the columns will be the sorted list of dict keys. Index given Column not given: ==========================
###Code
my_dict = {
'one' : pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
'two' : pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])
}
my_dataFrame = pd.DataFrame(my_dict)
my_dataFrame
pd.DataFrame(my_dict, index=['d', 'b', 'a'])
###Output
_____no_output_____
###Markdown
Index and Column Both given: ==========================
###Code
pd.DataFrame(my_dict, index=['d', 'b', 'a'], columns=['two', 'three'])
###Output
_____no_output_____
###Markdown
Information about columns: ==========================
###Code
my_dataFrame.columns # list the column labels of the frame created above
###Output
_____no_output_____
###Markdown
11.4 From dict of ndarrays / lists : - The ndarrays must all be the same length.- If an index is passed, it must clearly also be the same length as the arrays.- If no index is passed, the result will be range(n), where n is the array length. Column and Index not given: ==========================
###Code
my_dict = {
'one' : [1., 2., 3., 4.],
'two' : [4., 3., 2., 1.]
}
pd.DataFrame(my_dict)
#As column is not given, so the key in sorted form will be used as column
#As index is not given so it'll range from 0 to size-1
###Output
_____no_output_____
###Markdown
Applying indexes: ===============
###Code
pd.DataFrame(my_dict, index=['a', 'b', 'c', 'd'])
###Output
_____no_output_____
###Markdown
Applying Column Lables: =====================
###Code
pd.DataFrame(my_dict, index=['a', 'b', 'c', 'd'], columns=['two', 'three'])
#as there is no record for the 'three' column, it is filled with NaN
###Output
_____no_output_____
###Markdown
11.5 From a list of dicts :
###Code
my_list = [{'a': 1, 'b': 2}, {'a': 5, 'b': 10, 'c': 20}]
pd.DataFrame(my_list)
###Output
_____no_output_____
###Markdown
Adding Indexes: ===============
###Code
pd.DataFrame(my_list, index=['first', 'second'])
###Output
_____no_output_____
###Markdown
Passing Column Labels: ======================
###Code
pd.DataFrame(my_list, columns=['a', 'b'])
###Output
_____no_output_____
###Markdown
11.6 From a dict of tuples : =========================== You can automatically create a multi-indexed frame by passing a tuples dictionary
###Code
pd.DataFrame({('a', 'b'): {('A', 'B'): 1, ('A', 'C'): 2},
('a', 'a'): {('A', 'C'): 3, ('A', 'B'): 4},
('a', 'c'): {('A', 'B'): 5, ('A', 'C'): 6},
('b', 'a'): {('A', 'C'): 7, ('A', 'B'): 8},
('b', 'b'): {('A', 'D'): 9, ('A', 'B'): 10}})
###Output
_____no_output_____
###Markdown
11.7 Column selection, addition, deletion : ============================================ - DataFrame can be treated semantically like a dict of like-indexed Series objects. Getting, setting, and deleting columns works with the same syntax as the analogous dict operations
###Code
my_dict = {
'one' : pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
'two' : pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])
}
_dataFrame = pd.DataFrame(my_dict)
_dataFrame['one'] #it will display the data of column 'one'
###Output
_____no_output_____
###Markdown
Assigning values to new columns: ==============================
###Code
_dataFrame['three'] = _dataFrame['one'] * _dataFrame['two'] # assigning calculated values to a column named 'three'
my_dataFrame['flag'] = my_dataFrame['one'] > 2 # if the value in column 'one' is > 2 assign True, otherwise False
my_dataFrame # display the complete data frame
###Output
_____no_output_____
###Markdown
Deleting a Column: =================
###Code
del _dataFrame['two'] # delete the column 'two' from the data frame
###Output
_____no_output_____
###Markdown
Popping a column: ==================
###Code
three = _dataFrame.pop('three') # pop the complete column 'three' from the dataframe
_dataFrame
###Output
_____no_output_____
###Markdown
Inserting a scalar value: =======================
###Code
_dataFrame['yes'] = 'no' # the new column 'yes' will be propagated with the value 'no'
_dataFrame
###Output
_____no_output_____
###Markdown
Taking values from a column and placing them in another: =========================================
###Code
_dataFrame['super'] = _dataFrame['yes'][:2]
_dataFrame
#taking first 2 values from column yes and placing in super column
###Output
_____no_output_____
###Markdown
Inserting new column using Insert Function: =====================================
###Code
_dataFrame.insert(1, 'hello', _dataFrame['one'])
# it will add a new column named 'hello' at position 1 and copy the data of column 'one' into it
_dataFrame
###Output
_____no_output_____
###Markdown
11.8 Indexing / Selection : ===========================
###Code
_dataFrame.loc['b'] # it will return the column labels and values for the row with label 'b'
_dataFrame.iloc[1] # it will return the row at integer position 1 (the second row)
###Output
_____no_output_____
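###Markdown
 A brief additional sketch: loc and iloc also accept a row selector and a column selector together (column 'one' is used because it still exists in _dataFrame at this point).
###Code
# extra sketch: selecting a single value by label and by integer position
print(_dataFrame.loc['b', 'one']) # row label 'b', column label 'one'
print(_dataFrame.iloc[1, 0]) # second row, first column, selected by position
###Output
_____no_output_____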
###Markdown
11.9 Data Alignment and Arithmetic : ======================================
###Code
df = pd.DataFrame(np.random.randn(10, 4), columns=['A', 'B', 'C', 'D'])
df2 = pd.DataFrame(np.random.randn(7, 3), columns=['A', 'B', 'C'])
###Output
_____no_output_____
###Markdown
Adding Columns: ==============
###Code
df + df2 # adds values with matching row/column labels; unmatched labels give NaN
###Output
_____no_output_____
###Markdown
Subtracting a Row: ============
###Code
df - df.iloc[1] # subtracting the row at position 1 from every row (broadcast along rows)
###Output
_____no_output_____
###Markdown
Multiplication and Addition: =======================
###Code
df * 5 + 2
###Output
_____no_output_____
###Markdown
Division: =======
###Code
1 / df
###Output
_____no_output_____
###Markdown
Exponential: =======
###Code
df ** 4
###Output
_____no_output_____
###Markdown
11.10 Boolean Operators : ==========================
###Code
df1 = pd.DataFrame({'a' : [1, 0, 1], 'b' : [0, 1, 1] }, dtype=bool)
df2 = pd.DataFrame({'a' : [0, 1, 1], 'b' : [1, 1, 0] }, dtype=bool)
pd.DataFrame({'a' : [0, 1, 1], 'b' : [1, 1, 0] }, dtype=bool)
###Output
_____no_output_____
###Markdown
And Operator: ============
###Code
df1 & df2
###Output
_____no_output_____
###Markdown
OR Operator: ============
###Code
df1 | df2
###Output
_____no_output_____
###Markdown
NOT Operator: ============
###Code
~df1 # ~ is the elementwise NOT operator for boolean frames
~df2
###Output
_____no_output_____
###Markdown
11.11 Transpose : ===================
###Code
# we use the T attribute for transpose
###Output
_____no_output_____
###Markdown
Transpose of Whole Table: =====================
###Code
df.T
###Output
_____no_output_____
###Markdown
Transpose of Selected rows: ========================
###Code
df[:4].T
###Output
_____no_output_____
###Markdown
12. Viewing Data in Pandas : ===================================== We can view data / display data in different ways: - See the top & bottom rows of the frame- Selecting a single column- Selecting via [], which slices the rows- For getting a cross section using a label- Selecting on a multi-axis by label- Showing label slicing, both endpoints are included- Reduction in the dimensions of the returned object- For getting a scalar value- For getting fast access to a scalar- Select via the position of the passed integers- By integer slices, acting similar to numpy/python- By lists of integer position locations, similar to the numpy/python style- For slicing rows explicitly- For slicing columns explicitly- For getting a value explicitly- For getting fast access to a scalar- Using a single column’s values to select data.- Selecting values from a DataFrame where a boolean condition is met.- Using the isin() method for filtering (A short extra sketch of label-based and fast scalar access with loc / at / iat follows section 12.9 below.) 12.1 Head of Data: =======================
###Code
df.head(2)
#first 2 records
###Output
_____no_output_____
###Markdown
12.2 tail of Data: =====================
###Code
df.tail(2)
#last 2 records
###Output
_____no_output_____
###Markdown
12.3 Display Index: =====================
###Code
df.index
###Output
_____no_output_____
###Markdown
12.4 Display Column: ========================
###Code
df.columns
###Output
_____no_output_____
###Markdown
12.5 Printing Values: ======================
###Code
df.values
###Output
_____no_output_____
###Markdown
12.6 Sorting by axis: ======================
###Code
df.sort_index(axis=0, ascending=False)
###Output
_____no_output_____
###Markdown
12.7 Sorting by values: =========================
###Code
df.sort_values(by='B')
###Output
_____no_output_____
###Markdown
12.8 Describing DataFrame: ============================
###Code
df.describe()
###Output
_____no_output_____
###Markdown
12.9 Selecting a Column: ============================
###Code
df['C']
#here's the data of column C
###Output
_____no_output_____
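###Markdown
 An additional sketch of a few access methods from the list in section 12 that are not demonstrated elsewhere in this notebook: label-based selection with loc and fast scalar access with at / iat (df is the random DataFrame with columns A-D created earlier, so its row labels are 0..9).
###Code
# extra sketch: label-based and fast scalar access on the random DataFrame df
print(df.loc[0, 'A']) # row label 0, column label 'A'
print(df.at[0, 'A']) # fast scalar access by label
print(df.iat[0, 0]) # fast scalar access by integer position
###Output
_____no_output_____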
###Markdown
12.10 Slicing: ==================
###Code
df[0:7]
# By lists of integer position locations, similar to the numpy/python style
df.iloc[[1,2,4],[0,2]]
# For slicing rows explicitly
df.iloc[:,1:2]
# For getting a value explicitly
df.iloc[1,2]
# Using a single column’s values to select data.
df[df.A > 0]
# Selecting values from a DataFrame where a boolean condition is met.
df[df > 0]
# Using the isin() method for filtering:
df2 = df.copy()
df2
###Output
_____no_output_____ |
binder_sandbox.ipynb | ###Markdown
Please read the [README](https://github.com/ibudiselic/covid/blob/master/README.md) file in this repository.
###Code
# This shows the actual values on hover (in the bottom right of the chart), and similar basic interactivity.
# It can be pretty slow for many countries, (if it is too slow, flip it to `False`).
# If some charts don't render on the first run if using interactivity, run it again (yay JavaScript :).
INTERACTIVE_PLOTS = True
%run lib.ipynb
# Modify this however you like, and then click 'Cell > Run All' in the top menu.
# The country names must match the dataset exactly. See the list at the bottom.
countries_to_plot = ["Croatia", "Switzerland"]
%%javascript
IPython.OutputArea.auto_scroll_threshold = 9999;
# You can also modify these parameters if you like.
# The dataset starts on 2020-01-22, so dates before that won't work.
analyze_countries(absolute_date_comparison_start_date='2020-03-01', backpredict_days=5)
# List of all available countries, for reference.
countries_rows = []
for i, c in enumerate(sorted(db.countries.keys())):
if i%4 == 0:
countries_rows.append([])
countries_rows[-1].append(f'"{c}"')
for r in countries_rows:
print(''.join(c.ljust(30) for c in r))
###Output
"Afghanistan" "Albania" "Algeria" "Andorra"
"Angola" "Antigua and Barbuda" "Argentina" "Armenia"
"Australia" "Austria" "Azerbaijan" "Bahrain"
"Bangladesh" "Barbados" "Belarus" "Belgium"
"Belize" "Benin" "Bhutan" "Bolivia"
"Bosnia and Herzegovina" "Botswana" "Brazil" "Brunei"
"Bulgaria" "Burkina Faso" "Burundi" "Cabo Verde"
"Cambodia" "Cameroon" "Canada" "Central African Republic"
"Chad" "Chile" "China" "Colombia"
"Comoros" "Congo (Brazzaville)" "Congo (Kinshasa)" "Costa Rica"
"Cote d'Ivoire" "Croatia" "Cuba" "Cyprus"
"Czechia" "Denmark" "Djibouti" "Dominica"
"Dominican Republic" "Ecuador" "Egypt" "El Salvador"
"Equatorial Guinea" "Eritrea" "Estonia" "Ethiopia"
"Fiji" "Finland" "France" "Gabon"
"Georgia" "Germany" "Ghana" "Greece"
"Grenada" "Guatemala" "Guinea" "Guinea-Bissau"
"Guyana" "Haiti" "Holy See" "Honduras"
"Hungary" "Iceland" "India" "Indonesia"
"Iran" "Iraq" "Ireland" "Israel"
"Italy" "Jamaica" "Japan" "Jordan"
"Kazakhstan" "Kenya" "Korea, South" "Kuwait"
"Kyrgyzstan" "Laos" "Latvia" "Lebanon"
"Liberia" "Libya" "Liechtenstein" "Lithuania"
"Luxembourg" "Madagascar" "Malawi" "Malaysia"
"Maldives" "Mali" "Malta" "Mauritania"
"Mauritius" "Mexico" "Moldova" "Monaco"
"Mongolia" "Montenegro" "Morocco" "Mozambique"
"Namibia" "Nepal" "Netherlands" "New Zealand"
"Nicaragua" "Niger" "Nigeria" "North Macedonia"
"Norway" "Oman" "Pakistan" "Panama"
"Papua New Guinea" "Paraguay" "Peru" "Philippines"
"Poland" "Portugal" "Qatar" "Romania"
"Russia" "Rwanda" "Saint Lucia" "Saint Vincent and the Grenadines"
"San Marino" "Saudi Arabia" "Senegal" "Serbia"
"Seychelles" "Sierra Leone" "Singapore" "Slovakia"
"Slovenia" "Somalia" "South Africa" "South Sudan"
"Spain" "Sri Lanka" "Sudan" "Suriname"
"Sweden" "Switzerland" "Syria" "Taiwan*"
"Tajikistan" "Tanzania" "Thailand" "Timor-Leste"
"Togo" "Trinidad and Tobago" "Tunisia" "Turkey"
"US" "Uganda" "Ukraine" "United Arab Emirates"
"United Kingdom" "Uruguay" "Uzbekistan" "Venezuela"
"Vietnam" "Western Sahara" "Yemen" "Zambia"
"Zimbabwe"
|
photo2cartoon.ipynb | ###Markdown
###Code
!git clone https://github.com/minivision-ai/photo2cartoon.git
cd photo2cartoon
!gdown https://drive.google.com/uc?id=1lsQS8hOCquMFKJFhK_z-n03ixWGkjT2P
!unzip photo2cartoon_resources_20200504.zip
pip install face_alignment
!pip install tensorflow-gpu==1.14
!pip install tensorflow==1.14
!python test.py --photo_path ./images/photo_test.jpg --save_path ./images/cartoon_result.jpg
###Output
_____no_output_____
###Markdown
Day 3 Assignment -- Pixel2Pixel: Photo-to-Cartoon After today's lesson you should have a basic understanding of image-to-image translation and style transfer, and you probably want to try it yourself. To give you that hands-on practice and to consolidate what you have learned, the Day 3 assignment walks you through the application covered in class -- **Pixel2Pixel: photo-to-cartoon**. In this assignment you need to: **fill in the missing code, get the training to run, and submit one finished cartoonized image -- completing your first photo-to-cartoon application.** ![](https://ai-studio-static-online.cdn.bcebos.com/6e3af14bf9f847ab92215753fb3b8f61a66186b538f44da78ca56627c35717b8) Preparation: import dependencies & prepare the data
###Code
import paddle
import paddle.nn as nn
from paddle.io import Dataset, DataLoader
import os
import cv2
import numpy as np
from tqdm import tqdm
import matplotlib.pyplot as plt
%matplotlib inline
###Output
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/matplotlib/__init__.py:107: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
from collections import MutableMapping
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/matplotlib/rcsetup.py:20: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
from collections import Iterable, Mapping
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/matplotlib/colors.py:53: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
from collections import Sized
###Markdown
Data preparation:- The real-face photos come from [seeprettyface](http://www.seeprettyface.com/mydataset.html).- Data preprocessing (see the [photo2cartoon](https://github.com/minivision-ai/photo2cartoon) project for details). - The [photo2cartoon](https://github.com/minivision-ai/photo2cartoon) project is used to generate the cartoon images corresponding to the real-face photos.
###Code
# unzip the data
!unzip -qao data/data79149/cartoon_A2B.zip -d data/
###Output
_____no_output_____
###Markdown
Data visualization
###Code
# training data statistics
train_names = os.listdir('data/cartoon_A2B/train')
print(f'Number of training samples: {len(train_names)}')
# test data statistics
test_names = os.listdir('data/cartoon_A2B/test')
print(f'Number of test samples: {len(test_names)}')
# visualize a few training samples
imgs = []
for img_name in np.random.choice(train_names, 3, replace=False):
imgs.append(cv2.imread('data/cartoon_A2B/train/'+img_name))
img_show = np.vstack(imgs)[:,:,::-1]
plt.figure(figsize=(10, 10))
plt.imshow(img_show)
plt.show()
class PairedData(Dataset):
def __init__(self, phase):
super(PairedData, self).__init__()
self.img_path_list = self.load_A2B_data(phase) # 获取数据列表
self.num_samples = len(self.img_path_list) # 数据量
def __getitem__(self, idx):
img_A2B = cv2.imread(self.img_path_list[idx]) # 读取一组数据
img_A2B = img_A2B.astype('float32') / 127.5 - 1. # 从0~255归一化至-1~1
img_A2B = img_A2B.transpose(2, 0, 1) # 维度变换HWC -> CHW
img_A = img_A2B[..., :256] # 真人照
img_B = img_A2B[..., 256:] # 卡通图
return img_A, img_B
def __len__(self):
return self.num_samples
@staticmethod
def load_A2B_data(phase):
assert phase in ['train', 'test'], "phase should be set within ['train', 'test']"
        # read the dataset; each image in the data contains a photo and its corresponding cartoon
data_path = 'data/cartoon_A2B/'+phase
return [os.path.join(data_path, x) for x in os.listdir(data_path)]
paired_dataset_train = PairedData('train')
paired_dataset_test = PairedData('test')
###Output
_____no_output_____
###Markdown
Step 1: Build the generator. Please fill in the blanks in the code; the text after '' is a hint.
###Code
class UnetGenerator(nn.Layer):
def __init__(self, input_nc=3, output_nc=3, ngf=64):
super(UnetGenerator, self).__init__()
self.down1 = nn.Conv2D(input_nc, ngf, kernel_size=4, stride=2, padding=1)
self.down2 = Downsample(ngf, ngf*2)
self.down3 = Downsample(ngf*2, ngf*4)
self.down4 = Downsample(ngf*4, ngf*8)
self.down5 = Downsample(ngf*8, ngf*8)
self.down6 = Downsample(ngf*8, ngf*8)
self.down7 = Downsample(ngf*8, ngf*8)
self.center = Downsample(ngf*8, ngf*8)
self.up7 = Upsample(ngf*8, ngf*8, use_dropout=True)
self.up6 = Upsample(ngf*8*2, ngf*8, use_dropout=True)
self.up5 = Upsample(ngf*8*2, ngf*8, use_dropout=True)
self.up4 = Upsample(ngf*8*2, ngf*8)
self.up3 = Upsample(ngf*8*2, ngf*4)
self.up2 = Upsample(ngf*4*2, ngf*2)
self.up1 = Upsample(ngf*2*2, ngf)
self.output_block = nn.Sequential(
nn.ReLU(),
nn.Conv2DTranspose(ngf*2, output_nc, kernel_size=4, stride=2, padding=1),
nn.Tanh()
)
def forward(self, x):
d1 = self.down1(x)
d2 = self.down2(d1)
d3 = self.down3(d2)
d4 = self.down4(d3)
d5 = self.down5(d4)
d6 = self.down6(d5)
d7 = self.down7(d6)
c = self.center(d7)
x = self.up7(c, d7)
x = self.up6(x, d6)
x = self.up5(x, d5)
x = self.up4(x, d4)
x = self.up3(x, d3)
x = self.up2(x, d2)
x = self.up1(x, d1)
x = self.output_block(x)
return x
class Downsample(nn.Layer):
# LeakyReLU => conv => batch norm
def __init__(self, in_dim, out_dim, kernel_size=4, stride=2, padding=1):
super(Downsample, self).__init__()
self.layers = nn.Sequential(
nn.LeakyReLU(0.2), # LeakyReLU, leaky=0.2
nn.Conv2D(in_dim, out_dim, kernel_size, stride, padding, bias_attr=False), # Conv2D
nn.BatchNorm2D(out_dim)
)
def forward(self, x):
x = self.layers(x)
return x
class Upsample(nn.Layer):
# ReLU => deconv => batch norm => dropout
def __init__(self, in_dim, out_dim, kernel_size=4, stride=2, padding=1, use_dropout=False):
super(Upsample, self).__init__()
sequence = [
nn.ReLU(), # ReLU
nn.Conv2DTranspose(in_dim, out_dim, kernel_size, stride, padding, bias_attr=False), # Conv2DTranspose
nn.BatchNorm2D(out_dim)
]
if use_dropout:
sequence.append(nn.Dropout(p=0.5))
self.layers = nn.Sequential(*sequence)
def forward(self, x, skip):
x = self.layers(x)
x = paddle.concat([x, skip], axis=1)
return x
###Output
_____no_output_____
###Markdown
Step 2: Build the discriminator. Please fill in the blanks in the code; the text after '' is a hint.
###Code
class NLayerDiscriminator(nn.Layer):
def __init__(self, input_nc=6, ndf=64):
super(NLayerDiscriminator, self).__init__()
self.layers = nn.Sequential(
nn.Conv2D(input_nc, ndf, kernel_size=4, stride=2, padding=1),
nn.LeakyReLU(0.2),
ConvBlock(ndf, ndf*2),
ConvBlock(ndf*2, ndf*4),
ConvBlock(ndf*4, ndf*8, stride=1),
nn.Conv2D(ndf*8, 1, kernel_size=4, stride=1, padding=1),
nn.Sigmoid()
)
def forward(self, input):
return self.layers(input)
class ConvBlock(nn.Layer):
# conv => batch norm => LeakyReLU
def __init__(self, in_dim, out_dim, kernel_size=4, stride=2, padding=1):
super(ConvBlock, self).__init__()
self.layers = nn.Sequential(
nn.Conv2D(in_dim, out_dim, kernel_size, stride, padding, bias_attr=False), # Conv2D
nn.BatchNorm2D(out_dim), # BatchNorm2D
nn.LeakyReLU(0.2)
)
def forward(self, x):
x = self.layers(x)
return x
generator = UnetGenerator()
discriminator = NLayerDiscriminator()
out = generator(paddle.ones([1, 3, 256, 256]))
print('Generator output shape:', out.shape) # should be [1, 3, 256, 256]
out = discriminator(paddle.ones([1, 6, 256, 256]))
print('Discriminator output shape:', out.shape) # should be [1, 1, 30, 30]
# hyperparameters
LR = 1e-4
BATCH_SIZE = 8
EPOCHS = 100
# optimizers
optimizerG = paddle.optimizer.Adam(
learning_rate=LR,
parameters=generator.parameters(),
beta1=0.5,
beta2=0.999)
optimizerD = paddle.optimizer.Adam(
learning_rate=LR,
parameters=discriminator.parameters(),
beta1=0.5,
beta2=0.999)
# loss functions
bce_loss = nn.BCELoss()
l1_loss = nn.L1Loss()
# dataloader
data_loader_train = DataLoader(
paired_dataset_train,
batch_size=BATCH_SIZE,
shuffle=True,
drop_last=True
)
data_loader_test = DataLoader(
paired_dataset_test,
batch_size=BATCH_SIZE
)
results_save_path = 'work/results'
os.makedirs(results_save_path, exist_ok=True) # folder for saving the test results of each epoch
weights_save_path = 'work/weights'
os.makedirs(weights_save_path, exist_ok=True) # folder for saving the model weights
for epoch in range(EPOCHS):
for data in tqdm(data_loader_train):
real_A, real_B = data
optimizerD.clear_grad()
# D([real_A, real_B])
real_AB = paddle.concat((real_A, real_B), 1)
d_real_predict = discriminator(real_AB)
d_real_loss = bce_loss(d_real_predict, paddle.ones_like(d_real_predict))
# D([real_A, fake_B])
fake_B = generator(real_A).detach()
fake_AB = paddle.concat((real_A, fake_B), 1)
d_fake_predict = discriminator(fake_AB)
d_fake_loss = bce_loss(d_fake_predict, paddle.zeros_like(d_fake_predict))
# train D
d_loss = (d_real_loss + d_fake_loss) / 2.
d_loss.backward()
optimizerD.step()
optimizerG.clear_grad()
# D([real_A, fake_B])
fake_B = generator(real_A)
fake_AB = paddle.concat((real_A, fake_B), 1)
g_fake_predict = discriminator(fake_AB)
g_bce_loss = bce_loss(g_fake_predict, paddle.ones_like(g_fake_predict))
g_l1_loss = l1_loss(fake_B, real_B) * 100.
g_loss = g_bce_loss + g_l1_loss * 1.
# train G
g_loss.backward()
optimizerG.step()
print(f'Epoch [{epoch+1}/{EPOCHS}] Loss D: {d_loss.numpy()}, Loss G: {g_loss.numpy()}')
if (epoch+1) % 10 == 0:
paddle.save(generator.state_dict(), os.path.join(weights_save_path, 'epoch'+str(epoch+1).zfill(3)+'.pdparams'))
# test
generator.eval()
with paddle.no_grad():
for data in data_loader_test:
real_A, real_B = data
break
fake_B = generator(real_A)
result = paddle.concat([real_A[:3], real_B[:3], fake_B[:3]], 3)
result = result.detach().numpy().transpose(0, 2, 3, 1)
result = np.vstack(result)
result = (result * 127.5 + 127.5).astype(np.uint8)
cv2.imwrite(os.path.join(results_save_path, 'epoch'+str(epoch+1).zfill(3)+'.png'), result)
generator.train()
###Output
100%|██████████| 170/170 [00:23<00:00, 7.22it/s]
1%| | 1/170 [00:00<00:24, 6.87it/s]
###Markdown
Finally: try out the cartoonization effect with the code you completed!
###Code
# load weights into the generator
results_save_path = 'work/results'
weights_save_path = 'work/weights'
last_weights_path = os.path.join(weights_save_path, sorted(os.listdir(weights_save_path))[-1])
print('Loading weights:', last_weights_path)
model_state_dict = paddle.load(last_weights_path)
generator.load_dict(model_state_dict)
generator.eval()
# read the data
img_name='data/cartoon_A2B/test/01462.png'
img_A2B = cv2.imread(img_name)
img_A = img_A2B[:, :256] # real photo
img_B = img_A2B[:, 256:] # cartoon image
g_input = img_A.astype('float32') / 127.5 - 1 # normalize
g_input = g_input[np.newaxis, ...].transpose(0, 3, 1, 2) # NHWC -> NCHW
g_input = paddle.to_tensor(g_input) # numpy -> tensor
g_output = generator(g_input)
g_output = g_output.detach().numpy() # tensor -> numpy
g_output = g_output.transpose(0, 2, 3, 1)[0] # NCHW -> NHWC
g_output = g_output * 127.5 + 127.5 # de-normalize
g_output = g_output.astype(np.uint8)
img_show = np.hstack([img_A, g_output])[:,:,::-1]
plt.figure(figsize=(8, 8))
plt.imshow(img_show)
plt.show()
###Output
_____no_output_____ |
B2-NLP/Harish_NLP_Regular_expression.ipynb | ###Markdown
RegEx Character Class Demo
###Code
# Matching all the characters
# Returns list of all characters except digits in given text
all_char_find = re.findall(r"[a-zA-z]",text)
print(f"matches all character except digits in text and returns a list:- \n {all_char_find} \n")
# Using \w matches all the charcters in the string
all_word = re.findall(r"[\w]",text)
print(f"matches all character in text and returns a list:- \n {all_word} \n")
# Using \d matches all digits
all_digits = re.findall(r"[\d]",text)
print(f"matches all digits and returns a list:- \n {all_digits} \n")
# Using [^a-zA-Z] return non alphabet
non_alpha = re.findall(r"[^a-zA-Z]",text)
print(f" Return list of non-alpha but returns digits \n {non_alpha} \n")
# Using \s matches spaces in text
space = re.findall(r"\s",text)
print(f"matches all spaces in text:- \n {space} \n")
# Using \W matches non-character
non_char = re.findall(r"\W",text)
print(f"matches non-char:- \n {non_char} \n")
###Output
matches all character except digits in text and returns a list:-
['P', 'y', 't', 'h', 'o', 'n', 'i', 's', 'a', 'n', 'i', 'n', 't', 'e', 'r', 'p', 'r', 'e', 't', 'e', 'd', 'h', 'i', 'g', 'h', 'l', 'e', 'v', 'e', 'l', 'a', 'n', 'd', 'g', 'e', 'n', 'e', 'r', 'a', 'l', 'p', 'u', 'r', 'p', 'o', 's', 'e', 'p', 'r', 'o', 'g', 'r', 'a', 'm', 'm', 'i', 'n', 'g', 'l', 'a', 'n', 'g', 'u', 'a', 'g', 'e', 'C', 'r', 'e', 'a', 't', 'e', 'd', 'b', 'y', 'G', 'u', 'i', 'd', 'o', 'v', 'a', 'n', 'R', 'o', 's', 's', 'u', 'm', 'a', 'n', 'd', 'f', 'i', 'r', 's', 't', 'r', 'e', 'l', 'e', 'a', 's', 'e', 'd', 'i', 'n']
matches all character in text and returns a list:-
['P', 'y', 't', 'h', 'o', 'n', 'i', 's', 'a', 'n', 'i', 'n', 't', 'e', 'r', 'p', 'r', 'e', 't', 'e', 'd', 'h', 'i', 'g', 'h', 'l', 'e', 'v', 'e', 'l', 'a', 'n', 'd', 'g', 'e', 'n', 'e', 'r', 'a', 'l', 'p', 'u', 'r', 'p', 'o', 's', 'e', 'p', 'r', 'o', 'g', 'r', 'a', 'm', 'm', 'i', 'n', 'g', 'l', 'a', 'n', 'g', 'u', 'a', 'g', 'e', 'C', 'r', 'e', 'a', 't', 'e', 'd', 'b', 'y', 'G', 'u', 'i', 'd', 'o', 'v', 'a', 'n', 'R', 'o', 's', 's', 'u', 'm', 'a', 'n', 'd', 'f', 'i', 'r', 's', 't', 'r', 'e', 'l', 'e', 'a', 's', 'e', 'd', 'i', 'n', '1', '9', '9', '1']
matches all digits and returns a list:-
['1', '9', '9', '1']
Return list of non-alpha but returns digits
[' ', ' ', ' ', ',', ' ', '-', ' ', ' ', '-', ' ', ' ', '.', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', '1', '9', '9', '1']
matches all spaces in text:-
[' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ']
matches non-char:-
[' ', ' ', ' ', ',', ' ', '-', ' ', ' ', '-', ' ', ' ', '.', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ']
###Markdown
Quantifier Demo
###Code
# Using \d with the {,4} quantifier
# the below pattern matches up to 4 consecutive digits, so it captures the year 1991 (along with many empty matches)
most_4_word = re.findall(r"\d{,4}",text)
print(f"Matches a digit:- \n {most_4_word} \n")
# Using + quantifier
using_plus = re.findall(r"[\w]+",text)
print(f"Matches the words in text:- \n {using_plus} \n")
###Output
Matches a digit:-
['', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '1991', '']
Matches the words in text:-
['Python', 'is', 'an', 'interpreted', 'high', 'level', 'and', 'general', 'purpose', 'programming', 'language', 'Created', 'by', 'Guido', 'van', 'Rossum', 'and', 'first', 'released', 'in', '1991']
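###Markdown
 A few extra quantifier examples beyond {,4} and + (the pattern and sample string below are chosen only for illustration).
###Code
# extra sketch: the ?, * and {m,n} quantifiers
sample = "color colour colouur"
print(re.findall(r"colou?r", sample)) # ? = zero or one 'u' -> ['color', 'colour']
print(re.findall(r"colou*r", sample)) # * = zero or more 'u' -> all three words
print(re.findall(r"colou{1,2}r", sample)) # {1,2} = one or two 'u' -> ['colour', 'colouur']
###Output
_____no_output_____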
###Markdown
Finding URLs and Hashtags in text
###Code
test = "https://www.demo.com is a demo website . https://github.com is githubs url"
re.findall(r"http[s]?://(?:[w]{3})?(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+",test)
# Here ?: inside () means non-capturing groups
twitter = "hello world #Demo #RegEx "
re.findall(r"#[\w]+",twitter)
# Substitute the link with URL
re.sub(r"http[s]?://(?:[w]{3})?(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+","<URL>",test)
# Substitute the hashtags with String Hashtag
re.sub(r"#[\w]+","<Hashtag>",twitter)
###Output
_____no_output_____
###Markdown
Extracting the domain name of the email and the name of the user from text
###Code
test1 = '''harish [email protected]
derick [email protected]'''
patrn = r'([\w]+)\s(?:\w+)@([A-Z0-9]+)\.(?:[A-Z]{2,4})'
re.findall(patrn,test1,re.IGNORECASE)
###Output
_____no_output_____
###Markdown
Clean the sentence and remove all unwanted spaces, commas, semi-colon and colon
###Code
sentence = """Split , this sentence ; into : words"""
# compile is used to store the pattern that are used frequently
pattern = re.compile(r'[,;:\s]+')
"".join(pattern.sub(' ',sentence))
###Output
_____no_output_____
###Markdown
Clean the tweet by removing user handles, URLs, hashtags and other punctuation
###Code
tweet = '''Good advice! RT @TheNextWeb: What I would do differently if I was learning to code today http://t.co/lbwej0pxOd cc: @garybernhardt #rstats'''
def tweet_cleaner(text):
text = re.sub(r'http[s]?://(?:[w]{3})?(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+','',text)
text = re.sub(r"#\S+",'',text)
text = re.sub(r'RT | cc:','',text)
text = re.sub(r'@\S+','',text)
text = re.sub('\s+', ' ', text)
return text
tweet_cleaner(tweet)
###Output
_____no_output_____
###Markdown
Find adverbs in text and extract them using regex
###Code
text_adv = "Good advice! What I would do differently if I was learning to code today"
re.findall(r'\w+ly',text_adv)
###Output
_____no_output_____ |
notebooks/vizAbsenceSz.ipynb | ###Markdown
Introduction to visualizing data in the eeghdf files Getting started: The EEG is stored in hierarchical data format (HDF5). This format is widely used, open, and supported in many languages, e.g., matlab, R, python, C, etc. Here, I will use the h5py library in python
###Code
# import libraries
from __future__ import print_function, division, unicode_literals
%matplotlib inline
# %matplotlib notebook # allows interactions
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import h5py
from pprint import pprint
import stacklineplot # local copy
# Make all the figures bigger and easier to see in this notebook
# matplotlib.rcParams['figure.figsize'] = (18.0, 12.0)
FIGSIZE = (12.0,8.0) # use with %matplotlib inline
matplotlib.rcParams['figure.figsize'] = FIGSIZE
###Output
_____no_output_____
###Markdown
Direct access via h5py library: We have written a helper library eeghdf to conveniently access these hdf5 files, but you do not need to rely upon it. You can access all the data via hdf5 libraries. Below, we show how this is done via the popular h5py library in python. The hdf5 data is stored hierarchically in a file as a tree of keys and values, similar to how files are stored in a file system. It is possible to inspect the file using standard hdf5 tools. Below we show the keys and values associated with the root of the tree (a small extra sketch that walks the whole tree follows). This shows that there is a "patient" group and a group "record-0"
###Code
# first open the hdf5 file
hdf = h5py.File('../data/absence_epilepsy.eeghdf','r')
# show the groups at the root of the tree as a list
list(hdf.items())
###Output
_____no_output_____
###Markdown
We can focus on the patient group and access it via hdf['patient'] as if it was a python dictionary. Here are the key,value pairs in that group. Note that the patient information has been anonymized. Everyone is given the same set of birthdays. This shows that this file is for Subject 2619, who is male.
###Code
list(hdf['patient'].attrs.items())
###Output
_____no_output_____
###Markdown
Now we look at how the waveform data is stored. By convention, the first record is called "record-0" and it contains the waveform data as well as the approximate time (relative to the birthdate) at which the study was done, as well as technical information like the number of channels, electrode names and sample rate.
###Code
rec = hdf['record-0']
list(rec.attrs.items())
###Output
_____no_output_____
###Markdown
Arrays of dataArrays of data are stored in "datasets" which have an interface similar to that of numpy arrays
###Code
# here is the list of data arrays stored in the record
list(rec.items())
rec['physical_dimensions'][:]
rec['prefilters'][:]
rec['signal_digital_maxs'][:]
rec['signal_digital_mins'][:]
rec['signal_physical_maxs'][:]
###Output
_____no_output_____
###Markdown
We can also grab the actual waveform data and visualize it. Using the helper library for matplotlib stackplot.py. [More work is being done in the eegml-eegvis package for more sophisticated visualization.]
###Code
signals = rec['signals'] # signals raw sample data (signed integers)
labels = rec['signal_labels']
electrode_labels = [str(s,'ascii') for s in labels]
numbered_electrode_labels = ["%d:%s" % (ii, str(labels[ii], 'ascii')) for ii in range(len(labels))]
###Output
_____no_output_____
###Markdown
Simple visualization of EEG (brief absence seizure)
###Code
# choose a point in the waveform to show a seizure
stacklineplot.show_epoch_centered(signals, 1476,epoch_width_sec=15,chstart=0, chstop=19, fs=rec.attrs['sample_frequency'], ylabels=electrode_labels, yscale=3.0)
plt.title('Absence Seizure');
###Output
_____no_output_____
###Markdown
Annotations. It was not a coincidence that I chose this time in the record. I used the annotations to focus on a portion of the record which was marked as having a seizure. You can access the clinical annotations via rec['edf_annotations']
###Code
annot = rec['edf_annotations']
antext = [s.decode('utf-8') for s in annot['texts'][:]]
starts100ns = [xx for xx in annot['starts_100ns'][:]] # process the bytes into text and lists of start times
# Use pandas dataframe to allow for pretty display in the jupyter notebook
df = pd.DataFrame(data=antext, columns=['text']) # load into a pandas data frame
df['starts100ns'] = starts100ns
df['starts_sec'] = df['starts100ns']/10**7
del df['starts100ns']
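# Peek at the resulting annotation table (annotation text and start time in seconds):
df.head()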
###Output
_____no_output_____
###Markdown
It is easy then to find the annotations related to seizures
###Code
df[df.text.str.contains('sz',case=False)]
print('matplotlib.__version__:', matplotlib.__version__)
print('h5py.__version__', h5py.__version__)
print('pandas.__version__:', pd.__version__)
###Output
matplotlib.__version__: 2.1.2
h5py.__version__ 2.7.0
pandas.__version__: 0.20.3
|
python/011.ipynb | ###Markdown
Problem 011In the 20×20 grid below, four numbers along a diagonal line have been marked in red.```08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 0849 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 0081 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 6552 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 9122 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 8024 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 5032 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 7067 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 2124 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 7221 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 9578 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 9216 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 5786 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 5819 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 4004 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 6688 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 6904 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 3620 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 1620 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 5401 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48```The product of these numbers is $26 \times 63 \times 78 \times 14 = 1788696$.What is the greatest product of four adjacent numbers in the same direction (up, down, left, right, or diagonally) in the 20×20 grid? Solution We use Numpy, specifically to store our grid as a matrix.
###Code
import numpy as np
grid = np.array([
[ 8, 2,22,97,38,15, 0,40, 0,75, 4, 5, 7,78,52,12,50,77,91, 8],
[49,49,99,40,17,81,18,57,60,87,17,40,98,43,69,48, 4,56,62, 0],
[81,49,31,73,55,79,14,29,93,71,40,67,53,88,30, 3,49,13,36,65],
[52,70,95,23, 4,60,11,42,69,24,68,56, 1,32,56,71,37, 2,36,91],
[22,31,16,71,51,67,63,89,41,92,36,54,22,40,40,28,66,33,13,80],
[24,47,32,60,99, 3,45, 2,44,75,33,53,78,36,84,20,35,17,12,50],
[32,98,81,28,64,23,67,10,26,38,40,67,59,54,70,66,18,38,64,70],
[67,26,20,68, 2,62,12,20,95,63,94,39,63, 8,40,91,66,49,94,21],
[24,55,58, 5,66,73,99,26,97,17,78,78,96,83,14,88,34,89,63,72],
[21,36,23, 9,75, 0,76,44,20,45,35,14, 0,61,33,97,34,31,33,95],
[78,17,53,28,22,75,31,67,15,94, 3,80, 4,62,16,14, 9,53,56,92],
[16,39, 5,42,96,35,31,47,55,58,88,24, 0,17,54,24,36,29,85,57],
[86,56, 0,48,35,71,89, 7, 5,44,44,37,44,60,21,58,51,54,17,58],
[19,80,81,68, 5,94,47,69,28,73,92,13,86,52,17,77, 4,89,55,40],
[ 4,52, 8,83,97,35,99,16, 7,97,57,32,16,26,26,79,33,27,98,66],
[88,36,68,87,57,62,20,72, 3,46,33,67,46,55,12,32,63,93,53,69],
[ 4,42,16,73,38,25,39,11,24,94,72,18, 8,46,29,32,40,62,76,36],
[20,69,36,41,72,30,23,88,34,62,99,69,82,67,59,85,74, 4,36,16],
[20,73,35,29,78,31,90, 1,74,31,49,71,48,86,81,16,23,57, 5,54],
[ 1,70,54,71,83,51,54,69,16,92,33,48,61,43,52, 1,89,19,67,48],
])
###Output
_____no_output_____
###Markdown
Here's what we're going to do. At some index, say $(i, j)$, we want to look right, down, and both up-right and bottom-right diagonals four units each, and then multiply. We're going to look at the largest number obtained. This is described in the function below.
###Code
def max_product(i, j):
return max([
np.prod(grid[i:i + 4, j]),
np.prod(grid[i, j:j + 4]),
np.prod(np.diag(grid[i:i + 4, j:j + 4])),
np.prod(np.diag(grid[i:i + 4, j:j + 4][::-1]))
])
###Output
_____no_output_____
###Markdown
Now, to make sure every four-number window we take stays full length and never runs off the edge of the grid, we're going to pad our matrix with zeros.
###Code
grid = np.pad(grid, pad_width=4, mode='constant', constant_values=0)
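# Any window that extends past the original 20x20 grid now includes a zero,
# so its product is 0 and it can never win; every slice also stays 4 elements long.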
###Output
_____no_output_____
###Markdown
Now, we just loop through the matrix.
###Code
max_prod = 0
for i in range(grid.shape[0]):
    for j in range(grid.shape[1]):
        prod = max_product(i, j)
        if prod > max_prod:
            max_prod = prod
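# Equivalent one-liner (an illustrative alternative, not required):
# max(max_product(i, j) for i in range(grid.shape[0]) for j in range(grid.shape[1]))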
max_prod
###Output
_____no_output_____ |
examples/question_generation_example.ipynb | ###Markdown
Question Generator example First we need to install HuggingFace's transformers library.
###Code
!pip install transformers
###Output
Collecting transformers
[?25l Downloading https://files.pythonhosted.org/packages/27/3c/91ed8f5c4e7ef3227b4119200fc0ed4b4fd965b1f0172021c25701087825/transformers-3.0.2-py3-none-any.whl (769kB)
[K |████████████████████████████████| 778kB 2.8MB/s
[?25hRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.6/dist-packages (from transformers) (2019.12.20)
Collecting sacremoses
[?25l Downloading https://files.pythonhosted.org/packages/7d/34/09d19aff26edcc8eb2a01bed8e98f13a1537005d31e95233fd48216eed10/sacremoses-0.0.43.tar.gz (883kB)
[K |████████████████████████████████| 890kB 16.2MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from transformers) (1.18.5)
Requirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from transformers) (20.4)
Collecting tokenizers==0.8.1.rc1
[?25l Downloading https://files.pythonhosted.org/packages/40/d0/30d5f8d221a0ed981a186c8eb986ce1c94e3a6e87f994eae9f4aa5250217/tokenizers-0.8.1rc1-cp36-cp36m-manylinux1_x86_64.whl (3.0MB)
[K |████████████████████████████████| 3.0MB 13.9MB/s
[?25hRequirement already satisfied: filelock in /usr/local/lib/python3.6/dist-packages (from transformers) (3.0.12)
Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.6/dist-packages (from transformers) (4.41.1)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from transformers) (2.23.0)
Requirement already satisfied: dataclasses; python_version < "3.7" in /usr/local/lib/python3.6/dist-packages (from transformers) (0.7)
Collecting sentencepiece!=0.1.92
[?25l Downloading https://files.pythonhosted.org/packages/d4/a4/d0a884c4300004a78cca907a6ff9a5e9fe4f090f5d95ab341c53d28cbc58/sentencepiece-0.1.91-cp36-cp36m-manylinux1_x86_64.whl (1.1MB)
[K |████████████████████████████████| 1.1MB 24.4MB/s
[?25hRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (1.15.0)
Requirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (7.1.2)
Requirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (0.16.0)
Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->transformers) (2.4.7)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (2020.6.20)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (2.10)
Building wheels for collected packages: sacremoses
Building wheel for sacremoses (setup.py) ... [?25l[?25hdone
Created wheel for sacremoses: filename=sacremoses-0.0.43-cp36-none-any.whl size=893260 sha256=de9a3144ba8697a3da6def5608842dbb4a8adf084343c3b3bd6bb537ff17ab16
Stored in directory: /root/.cache/pip/wheels/29/3c/fd/7ce5c3f0666dab31a50123635e6fb5e19ceb42ce38d4e58f45
Successfully built sacremoses
Installing collected packages: sacremoses, tokenizers, sentencepiece, transformers
Successfully installed sacremoses-0.0.43 sentencepiece-0.1.91 tokenizers-0.8.1rc1 transformers-3.0.2
###Markdown
Next we have to clone the github repo and import `questiongenerator`:
###Code
!git clone https://github.com/amontgomerie/question_generator/
%cd question_generator/
%load questiongenerator.py
from questiongenerator import QuestionGenerator
from questiongenerator import print_qa
###Output
/content/question_generator
###Markdown
Make sure that we're using the GPU:
###Code
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
assert device == torch.device('cuda'), "Not using CUDA. Set: Runtime > Change runtime type > Hardware Accelerator: GPU"
###Output
_____no_output_____
###Markdown
Now we can create a `QuestionGenerator` and feed it some text. Here we use a BBC article about the Netflix show Indian Matchmaking (`articles/indian_matchmaking.txt`). The models should be automatically loaded when instantiating the `QuestionGenerator` class, but if you have them saved somewhere else you can pass the path to the folder containing them as an argument like `QuestionGenerator(MODEL_DIR)`.
###Code
qg = QuestionGenerator()
with open('articles/indian_matchmaking.txt', 'r') as a:
article = a.read()
###Output
_____no_output_____
###Markdown
Now we can call `QuestionGenerator`'s `generate()` method. We can choose an answer style from `['all', 'sentences', 'multiple_choice']`. You can choose how many questions you want to generate by setting `num_questions`. Note that the quality of questions may decrease if `num_questions` is high. If you just want to print the questions without showing the answers, you can optionally set `show_answers=False` when calling `print_qa()`.
###Code
qa_list = qg.generate(
article,
num_questions=10,
answer_style='all'
)
print_qa(qa_list)
###Output
Generating questions...
Evaluating QA pairs...
1) Q: What would have been offended if Sima Aunty spoke about?
A: In fact, I would have been offended if Sima Aunty was woke and spoke about choice, body positivity and clean energy during matchmaking.
2) Q: What does she think of Indian Matchmaking?
A: " Ms Vetticad describes Indian Matchmaking as "occasionally insightful" and says "parts of it are hilarious because Ms Taparia's clients are such characters and she herself is so unaware of her own regressive mindset".
3) Q: What do parents do to find a suitable match?
A: Parents also trawl through matrimonial columns in newspapers to find a suitable match for their children.
4) Q: In what country does Sima taparia try to find suitable matches for her wealthy clients?
A: 1. Sima Aunty
2. US (correct)
3. Delhi
4. Netflix
5) Q: What is the reason why she is being called out?
A: No wonder, then, that critics have called her out on social media for promoting sexism, and memes and jokes have been shared about "Sima aunty" and her "picky" clients.
6) Q: who describes Indian Matchmaking as "occasionally insightful"?
A: 1. Kiran Lamba Jha
2. Sima Taparia
3. Anna MM Vetticad
4. Ms Taparia's (correct)
7) Q: In what country does Sima taparia try to find suitable matches?
A: 1. Netflix
2. Delhi
3. US
4. India (correct)
8) Q: What is the story's true merit?
A: And, as writer Devaiah Bopanna points out in an Instagram post, that is where its true merit lies.
9) Q: What does Ms Vetticad think of Indian Matchmaking?
A: But an absence of caveats, she says, makes it "problematic".
10) Q: Who is the role of matchmaker?
A: Traditionally, matchmaking has been the job of family priests, relatives and neighbourhood aunties.
|
nbgrader/docs/source/user_guide/release/ps1/problem2.ipynb | ###Markdown
Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below:
###Code
NAME = ""
COLLABORATORS = ""
###Output
_____no_output_____
SGDFlightDelayDataset.ipynb | ###Markdown
[View in Colaboratory](https://colab.research.google.com/github/AnujArora23/FlightDelayML/blob/master/SGDFlightDelayDataset.ipynb) Flight Delay Prediction (Regression)These datasets are taken from Microsoft Azure Machine Learning Studio's sample datasets. It contains flight delay data for various airlines for the year 2013. There are two files uploaded as a compressed archive on my GitHub page:1) **Flight_Delays_Data.csv** : This contains arrival and departure details for various flights operated by 16 different airlines. The schema is pretty self explanatory but I will mention the important and slightly obscure columns:*OriginAirportID/DestAirportID* : The unique 5 digit integer identifier for a particular airport.*CRSDepTime/CRSArrTime* : Time in 24 hour format (e.g. 837 is 08:37AM)*ArrDel15/DepDel15* : Binary columns where *1* means that the flight was delayed beyond 15 minutes and *0* means it was not.*ArrDelay/DepDelay* : Time (in minutes) by which flight was delayed.2) **Airport_Codes_Dataset.csv** : This file gives the city, state and name of the airport along with the unique 5 digit integer identifier. Goals:**1. Clean the data, and see which features may be important and which might be redundant.****2. Do an exploratory analysis of the data to identify where most of the flight delays lie (e.g. which carrier, airport etc.).****3. Choose and build an appropriate regression model for this dataset to predict *ArrDelay* time in minutes.****4. Choose and build alternative models and compare all models with various accuracy metrics.** Install and import necessary libraries
###Code
!pip install -U -q PyDrive #Only if you are loading your data from Google Drive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import preprocessing
from sklearn.linear_model import SGDRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV
from sklearn import grid_search
from sklearn import metrics
###Output
/usr/local/lib/python2.7/dist-packages/sklearn/cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
"This module will be removed in 0.20.", DeprecationWarning)
/usr/local/lib/python2.7/dist-packages/sklearn/grid_search.py:42: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. This module will be removed in 0.20.
DeprecationWarning)
###Markdown
Authorize Google Drive (if your data is stored in Drive)
###Code
%%capture
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
###Output
Go to the following link in your browser:
https://accounts.google.com/o/oauth2/auth?redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&prompt=select_account&response_type=code&client_id=32555940559.apps.googleusercontent.com&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fappengine.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcompute+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Faccounts.reauth+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive&access_type=offline
Enter verification code: ··········
###Markdown
Data Ingestion. I have saved the two files in my personal drive storage and read them from there into a pandas data frame. Please modify the following cells to read the CSV files into a Pandas dataframe as per your storage location.
###Code
%%capture
downloaded = drive.CreateFile({'id':'1VxxZFZO7copAM_AHHF42zjO7rlGR1aPm'}) # replace the id with id of file you want to access
downloaded.GetContentFile('Airport_Codes_Dataset.csv')
downloaded = drive.CreateFile({'id':'1owzv86uWVRace_8xvRShFrDTRXSljp3I'}) # replace the id with id of file you want to access
downloaded.GetContentFile('Flight_Delays_Data.csv')
airpcode = pd.read_csv('Airport_Codes_Dataset.csv')
flightdel = pd.read_csv('Flight_Delays_Data.csv')
###Output
_____no_output_____
###Markdown
Data Cleanup Remove NULL/NaN rows and drop redundant columns
###Code
flightdel.dropna(inplace=True) #Drop NaNs. We will still have enough data
flightdel.drop(['Year','Cancelled'],axis=1,inplace=True) #There is only 1 unique value for both (2013 and 0 respectively)
flightdel.reset_index(drop=True,inplace=True)
###Output
_____no_output_____
###Markdown
Join the 2 CSV files to get airport code details for origin and destination
###Code
result=pd.merge(flightdel,airpcode,left_on='OriginAirportID',right_on='airport_id',how='left')
result.drop(['airport_id'],axis=1,inplace=True)
#result.reset_index(drop=True,inplace=True)
result.rename(columns={'city':'cityor','state':'stateor','name':'nameor'},inplace=True)
result=pd.merge(result,airpcode,left_on='DestAirportID',right_on='airport_id',how='left')
result.drop(['airport_id'],axis=1,inplace=True)
result.reset_index(drop=True,inplace=True)
result.rename(columns={'city':'citydest','state':'statedest','name':'namedest'},inplace=True)
flightdelfin=result
###Output
_____no_output_____
###Markdown
Perform Feature Conversion (to categorical dtype)
###Code
cols=['Carrier','DepDel15','ArrDel15','OriginAirportID','DestAirportID','cityor','stateor','nameor','citydest','statedest','namedest']
flightdelfin[cols]=flightdelfin[cols].apply(lambda x: x.astype('category'))
###Output
_____no_output_____
###Markdown
Drop duplicate observations
###Code
flightdelfin.drop_duplicates(keep='first',inplace=True)
flightdelfin.reset_index(drop=True,inplace=True)
###Output
_____no_output_____
###Markdown
Drop columns that are unnecessary for analysis **In particular, we drop the ArrDel15 and DepDel15 columns as they add no information beyond the ArrDelay and DepDelay columns, respectively.**
###Code
flightdelan=flightdelfin.iloc[:,0:11]
flightdelan.drop('DepDel15',axis=1,inplace=True)
flightdelan.head()
###Output
_____no_output_____
###Markdown
Final check before analysis ** We check if our data types are correct and do a general scan of the dataframe information. It looks good! Everything is as it should be.**
###Code
flightdelan.info()
#flightdelan[['Month','DayofMonth','DayOfWeek']]=flightdelan[['Month','DayofMonth','DayOfWeek']].apply(lambda x: x.astype(np.int64))
###Output
_____no_output_____
###Markdown
Data Exploration Number of flights per carrier in 2013
###Code
#fig=plt.figure(figsize=(10, 8), dpi= 80, facecolor='w', edgecolor='k'); #Use this for a larger size image
bp=flightdelan.Carrier.value_counts(sort=True, ascending=False).plot(kind='bar',title='Number of Flights per Carrier')
plt.show()
###Output
_____no_output_____
###Markdown
**As we can see above, the highest number of flights is operated by WN, i.e. *Southwest Airlines*, followed (not closely) by DL, i.e. *Delta Airlines*, while the third highest number of flights belongs to AA, i.e. *American Airlines*, which is almost the same as that of UA, i.e. *United Airlines.*** Calculate the average arrival delay (minutes) for all the flights of each carrier
###Code
meanarrdel=flightdelan.groupby('Carrier',as_index=False)['ArrDelay'].mean().sort_values('ArrDelay',ascending=False)
meanarrdel.reset_index(drop=True,inplace=True)
meanarrdel.columns=['Carrier','MeanArrDelay']
meanarrdel
###Output
_____no_output_____
###Markdown
Scale and compare the number of flights and average arrival delay
###Code
min_max_scaler = preprocessing.MinMaxScaler()
meanarrdel[['MeanArrDelay']]=min_max_scaler.fit_transform(meanarrdel[['MeanArrDelay']])
flightno=pd.DataFrame(flightdelan.Carrier.value_counts(sort=True, ascending=False))
flightno.rename(columns={'Carrier':'NoofFlights'},inplace=True)
mergedscat=pd.merge(left=meanarrdel,right=flightno,right_index=True,left_on='Carrier',how='left')
mergedscat.columns=['Carrier','AverageArrDelay','Noofflights']
mergedscat[['Noofflights']]=min_max_scaler.fit_transform(mergedscat[['Noofflights']])
fig, ax = plt.subplots()
#fig=plt.figure(figsize=(10, 8), dpi= 80, facecolor='w', edgecolor='k');
scat=ax.scatter(mergedscat.AverageArrDelay, mergedscat.Noofflights)
for i, txt in enumerate(mergedscat.Carrier):
ax.annotate(txt, (mergedscat.AverageArrDelay[i],mergedscat.Noofflights[i]))
plt.xlabel('Average Arrival Delay (scaled:0-1)')
plt.ylabel('Number of Flights (scaled:0-1)')
plt.title('Scatterplot of Number of Flights vs Average Delay (1: Highest of all ,0: Lowest of all)')
plt.show()
###Output
_____no_output_____
###Markdown
**As we can see above, the carriers that have a large number of flights (WN,DL,AA) have fared well as they have relatively normal delays. However, even with a low number of flights F9 i.e. *Frontier Airlines* and MQ i.e. *Envoy Air* have the highest delays in the year. ****This points to some serious planning issues among these smaller airlines or maybe they are more easily affected by outliers, just because they have a lesser number of flights.** Boxplots of arrival delays (mins) by carrier. **Carriers are ordered by the number of flights, decreasing from left to right.**
###Code
fig=plt.figure(figsize=(20, 18), dpi= 60, facecolor='w', edgecolor='k');
box=sns.boxplot(x="Carrier", y="ArrDelay", data=flightdelan,order=flightno.index)
plt.show()
###Output
/usr/local/lib/python2.7/dist-packages/seaborn/categorical.py:454: FutureWarning: remove_na is deprecated and is a private function. Do not use.
box_data = remove_na(group_data)
###Markdown
**As you can see above, the number of outliers is huge, and some carriers (AA and HA) have had flight delays beyond 1500 minutes (25 hours). That is insane!****Also, B6, i.e. *JetBlue Airways*, has the smallest range of outliers, but its on-time performance is by no means excellent. This suggests that it does not usually have inordinately long delays, but has small delays frequently.****Given their scale of operations, MQ, i.e. *Envoy Air*, and HA, i.e. *Hawaiian Airlines*, should not have such a wide range of outliers. Given that they have relatively few flights (HA has the fewest!), they should be able to manage their operations better.** Percentage of flights delayed beyond 2 hours by carrier (relative to total number of flights)
###Code
del2hr=pd.DataFrame(flightdelan[flightdelan.ArrDelay>120].groupby('Carrier').count().sort_values('ArrDelay',ascending=False)['ArrDelay'])
del2hr=del2hr.merge(flightno,left_index=True,right_index=True,how='inner')
del2hr['Delayperc']=((del2hr.ArrDelay.values.astype(np.float)/del2hr.NoofFlights.values)*100)
del2hr = del2hr.rename(columns={'ArrDelay':'Count'})
del2hr.sort_values('Delayperc',ascending=False)
###Output
_____no_output_____
###Markdown
**As we can see, the top airlines with flights delayed beyond 2 hours are VX, i.e. *Virgin America* (3.8%), EV, i.e. *ExpressJet Airlines* (3.7%), and MQ, i.e. *Envoy Air* (3.6%).****In contrast, the airlines with the fewest flights delayed beyond 2 hours are AS, i.e. *Alaska Airlines* (0.6%), and HA, i.e. *Hawaiian Airlines* (0.5%).** Busiest routes of 2013 **We can see the busiest routes of 2013 below, in terms of flight frequency.**
###Code
citypairfreq=pd.DataFrame(flightdelan.groupby(['OriginAirportID','DestAirportID'])['ArrDelay'].count().sort_values(ascending=False))
citypairfreq.rename(columns = {'ArrDelay':'Flight_Frequency'},inplace=True)
avgcitydel=pd.DataFrame(flightdelan.groupby(['OriginAirportID','DestAirportID'])['ArrDelay'].mean())
avgcitydel.rename(columns = {'ArrDelay':'Avg_ArrivalDel'},inplace=True)
pairdel=citypairfreq.join(avgcitydel,how='inner').sort_values('Flight_Frequency',ascending=False)
pairdel.head()
###Output
_____no_output_____
###Markdown
**We are now going to see which city pairs these are by referring to the original table.****As can be seen below, the first place belongs to San Francisco - Los Angeles, and the second place is simply the reverse route.**
###Code
flightdelfin[(flightdelfin.OriginAirportID==14771)&(flightdelfin.DestAirportID==12892)].head(3)
###Output
_____no_output_____
###Markdown
**The second city pair (third after SFO-LA and LA-SFO) that is the most frequent is Kahului to Honolulu.**
###Code
flightdelfin[(flightdelfin.OriginAirportID==13830)&(flightdelfin.DestAirportID==12173)].head(3)
###Output
_____no_output_____
###Markdown
Prediction Convert Categorical Variables to Indicator Variables **To do any sort of prediction, we need to convert the categorical variables to dummy (indicator) variables and drop one group for each categorical column in the original table, so as to get a baseline to compare to. If we do not drop one group from each categorical variable, our regression will fail due to multicollinearity.****The choice of which group(s) to drop is completely arbitrary, but in our case we will drop the carrier with the least number of flights, i.e. Hawaiian Airlines (HA), and we will choose an arbitrary city pair with a flight frequency of just 1 to drop. For now I have chosen OriginAirportID 14771 and DestAirportID 13871 to be dropped.**
###Code
flightdeldum=pd.get_dummies(flightdelan)
flightdeldum.drop(['Carrier_HA','OriginAirportID_14771','DestAirportID_13871'],axis=1,inplace=True)
flightdeldum.head()
###Output
_____no_output_____
###Markdown
**As one can see above, each categorical column has been converted to 'n' binary columns, where n is the number of groups in that particular categorical column. For example, the carrier column has been split into 16 indicator columns (number of unique carriers) and one has been dropped ('Carrier_HA').****Similar logic applies to the DestAirportID and OriginAirportID categorical columns.****NOTE: The Month, DayofMonth and DayofWeek columns have not been converted to indicator variables because they are ORDINAL categorical variables and not nominal. There is a need to retain their ordering because the 2nd month comes after the 1st and so on. Hence, since their current form retains their natural ordering, we do not need to touch these columns.** Stochastic Gradient Descent (SGD) Regression **To predict the arrival delays for various combinations of the input variables and future unseen data, we need to perform a regression since the output variable (ArrDelay) is a continuous one.****The choice of an SGD regressor is a logical one as we have hundreds of features and about 2.7 million observations. Classic multiple linear regression would take too long, as the resulting data matrix would be too large to invert. Similar time complexity arguments can be made for simple Gradient Descent, as it involves calculating the gradient over all observations. Hence, SGD will be suitable and much faster!**
###Code
scaler = preprocessing.StandardScaler()
flightdeldum[['CRSDepTime','CRSArrTime','DepDelay']]=scaler.fit_transform(flightdeldum[['CRSDepTime','CRSArrTime','DepDelay']])
y=flightdeldum.ArrDelay
X=flightdeldum.drop('ArrDelay',axis=1)
###Output
_____no_output_____
###Markdown
**In the cell above, we have scaled the relevant columns (Z score) that had values that were dissimilar to the rest of the features, as the regularization strategy we are going to use, requires features to be in a similar range.****We now split the data into training and testing data and in this case, we are going to use 80% of the data for training and the remaining for testing.**
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=123)
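# Quick sanity check (illustrative): the row counts should split roughly 80/20, e.g.
# print(X_train.shape, X_test.shape)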
###Output
_____no_output_____
###Markdown
**We will use elastic net regularization to get the best of both L1 and L2 regularizations. We want feature selection (L1/LASSO) and we want to eliminate any errors due to feature correlations, if any (L2/Ridge).**
###Code
reg=SGDRegressor(loss="squared_loss", penalty="elasticnet")
reg.fit(X_train,y_train)
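# Note: with penalty="elasticnet" the L1/L2 mix is controlled by l1_ratio,
# which defaults to 0.15 (i.e. mostly L2) unless set explicitly.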
###Output
/usr/local/lib/python2.7/dist-packages/sklearn/linear_model/stochastic_gradient.py:128: FutureWarning: max_iter and tol parameters have been added in <class 'sklearn.linear_model.stochastic_gradient.SGDRegressor'> in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.
"and default tol will be 1e-3." % type(self), FutureWarning)
###Markdown
**In the above cell, we trained the model using the SGDRegressor and now it is time to predict using the test data.**
###Code
y_pred=reg.predict(X_test)
# The coefficients
#print('Coefficients: \n', reg.coef_)
# The mean squared error
print("Mean squared error: %.2f"
% mean_squared_error(y_test.values, y_pred))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(y_test.values, y_pred))
plt.scatter(y_test.values, y_pred)
plt.xlabel("True Values")
plt.ylabel("Predictions")
plt.show()
###Output
_____no_output_____
###Markdown
**As we can see from the above cells, 89% of the variance (R^2 score) in the data has been captured and the model is predicting well. The trend line in the graph would be very close to the ideal 45 degree trend line (expected line if the model predicted with 100% accuracy).****However, we do not want to overfit our model because we want it to perform well on new untested data. To check if our model overfits, we can run a 6 fold cross validation and analyze the variance (R^2 scores) on each fold ,as shown below:**
###Code
# Perform 6-fold cross validation
kfold = KFold(n_splits=6)
scores = cross_val_score(reg, X, y, cv=kfold)
print 'Cross-validated scores:', scores
###Output
Cross-validated scores: [0.89341664 0.88913277 0.90557714 0.88741018 0.87579391 0.84723398]
###Markdown
Grid Search (Hyperparameter Tuning) (Optional and Computationally Expensive) **The following function performs a search over the entire parameter grid (as specified below) for the initial learning rate, and L1 ratio, and returns the optimal parameters, after an n fold cross validation.**
###Code
#https://medium.com/@aneesha/svm-parameter-tuning-in-scikit-learn-using-gridsearchcv-2413c02125a0
def SGD_param_selection(X, y, nfolds):
eta0s = [0.001, 0.01, 0.1]
l1_ratios = [0.15, 0.25, 0.35, 0.45, 0.55]
param_grid = {'eta0': eta0s, 'l1_ratio' : l1_ratios}
grid_search = GridSearchCV(SGDRegressor(loss="squared_loss", penalty="elasticnet"), param_grid, cv=nfolds)
grid_search.fit(X, y)
grid_search.best_params_
return grid_search.best_params_
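# With 3 eta0 values x 5 l1_ratio values and cv=nfolds, GridSearchCV fits
# 15 * nfolds models, which is why this search is computationally expensive.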
SGD_param_selection(X_train,y_train,5)
###Output
_____no_output_____
###Markdown
**As we can see above, the grid search has yielded the optimal parameters for the initial learning rate and L1 ratio. This process took about 25 minutes to execute as it is very computationally expensive.****Let us now run the SGD with these parameters and check the accuracy.**
###Code
reg1=SGDRegressor(loss="squared_loss", penalty="elasticnet",eta0= 0.001,l1_ratio= 0.45)
reg1.fit(X_train,y_train)
y_pred1=reg1.predict(X_test)
# The coefficients
#print('Coefficients: \n', reg.coef_)
# The mean squared error
print("Mean squared error: %.2f"
% mean_squared_error(y_test.values, y_pred1))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(y_test.values, y_pred1))
###Output
Mean squared error: 163.81
Variance score: 0.89
|
10. Getting started with Data Analysis/4. Data Analysis - Numeric/Data Analysis - Numeric.ipynb | ###Markdown
Q1. Finding Average Rating
###Code
print("Average rating of these apps:",float(str(int(sum(data['Rating']))/len(data['Rating']))[:4]))
s = 0
for i in data['Rating']:
s += i
s = int(s)
print(s/len(data['Rating']))
float(str(int(sum(data['Rating']))/len(data['Rating']))[:4])
len(data['Rating'])
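# Equivalent pandas one-liner (assuming `data` is the apps DataFrame loaded earlier):
# round(data['Rating'].mean(), 2)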
###Output
_____no_output_____
###Markdown
Q2. How many apps are there with rating 5?
###Code
c = 0
for i in data['Rating']:
if (i == 5.0):
c += 1
print("There are",c,'apps with rating 5')
###Output
There are 274 apps with rating 5
###Markdown
Q3. How many apps are there with rating between 4 - 4.5?
###Code
c = 0
for i in data['Rating']:
if (i >= 4.0 and i <= 4.5):
c += 1
print("There are",c,'apps with rating between 4 - 4.5')
###Output
There are 5446 apps with rating between 4 - 4.5
###Markdown
Q4. Average App Reviews
###Code
s = 0
for i in data['Reviews']:
s += int(i)
print(int(s/len(data['Reviews'])))
###Output
514376
|
notebooks/plotFigures7_8_9_10_v2.ipynb | ###Markdown
Plot Figures 7 (maps), 8 (boxplots), 9 (hour), 10 (GPS): Analyzing clusters through time and by features. For Sawi et al., 2021. Todo: * Add numbers to black dashed lines * Combine figure 11, features CUSTOM LEGEND
###Code
##CUSTOM LEGEND
import matplotlib.pyplot as plt
from matplotlib.patches import Patch
from matplotlib.lines import Line2D
legend_elements = [Line2D([0], [0], color='b', lw=4, label='Line'),
Line2D([0], [0], marker='o', color='w', label='Scatter',
markerfacecolor='g', markersize=15),
Patch(facecolor='orange', edgecolor='r',
label='Color Patch')]
# Create the figure
fig, ax = plt.subplots()
ax.legend(handles=legend_elements, loc='center')
plt.show()
import h5py
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
from obspy import read
from matplotlib import cm
import matplotlib.gridspec as gridspec
import os
import datetime as dtt
import matplotlib.patches
import matplotlib.patches as mpatches
import matplotlib.dates as mdates
import datetime
from sklearn.preprocessing import StandardScaler
import sys
from matplotlib.patches import Rectangle
import sklearn.metrics
from scipy import spatial
import matplotlib.image as mpimg
import obspy
from scipy.signal import butter, lfilter
import librosa
from scipy.io import loadmat
from sklearn.decomposition import PCA
import scipy.io as spio
from sklearn.metrics import silhouette_samples
import seaborn as sns
import scipy as sp
import scipy.io as spio
import scipy.signal
from sklearn.metrics import confusion_matrix
import seaborn as sns
from sklearn.metrics import classification_report
from obspy.signal.cross_correlation import correlate, xcorr_max
from letter_subplots import letter_subplots
sys.path.append('.')
sys.path.append('../src/visualization/')
import paths
from sklearn.cluster import KMeans
# import figureFunctions
from functions2 import getFeatures, getLocationFeatures,getNMFOrder,resortByNMF,getSpectra_fromWF,getSgram
from functions2 import PCAonFP,calcSilhScore,getDailyTempDiff,getSpectraMedian,CalcDiffPeak,PVEofPCA,getTopFCat
from functions2 import catMergeFromH5, swapLabels, calcFFT, getWF, swapLabels,trimSpectra, KMeansSpectra, compileSpectraFromWF
import figureFunctions2
###Output
_____no_output_____
###Markdown
Define helper functions (move later) Set paths
###Code
#%% load project variables: names and paths
# key = sys.argv[1]
key = "BB_Gorner_Event_Final_v11_J8"
keyN = "BB_Gorner_Cont_Final_v10_J8"
filetype = '.gse2'
filetypeN = '.sac'
p = paths.returnp(key)
pN = paths.returnp(keyN)
#%%
projName = p['projName']
datasetID = p['datasetID']
projName = p['projName']
station = p['station']
channel = p['channel']
path_top = p['path_top']
path_proj = p['path_proj']
outfile_name = p['outfile_name']
dataFile_name = p['dataFile_name']
path_WF = p['path_WF']
path_Cat = p['path_Cat'] #original, raw catalog
subCatalog_Name = f"{dataFile_name}_Sgrams_Subcatalog.hdf5"
pathFP = f'{path_top}{projName}/03_output/{station}/SpecUFEx_output/step4_FEATout/'
pathACM = f'{path_top}{projName}/03_output/{station}/SpecUFEx_output/step2_NMF/'
pathSTM = f'{path_top}{projName}/03_output/{station}/SpecUFEx_output/step4_stateTransMats/'
pathEB = f'{path_top}{projName}/02_src/02_SpecUFEx/EB.mat'
pathElnB = f'{path_top}{projName}/02_src/02_SpecUFEx/ElnB.mat'
pathW = path_proj + '02_src/02_SpecUFEx/out.DictGain.mat'
# pathClusCat = path_proj + f"principalDf_full_{mode}_Kopt{Kopt}.csv"
dataH5_path = path_proj + dataFile_name
projNameN = pN['projName']
datasetIDN = pN['datasetID']
projNameN = pN['projName']
station = pN['station']
channel = pN['channel']
path_top = pN['path_top']
path_projN = pN['path_proj']
outfile_nameN = pN['outfile_name']
dataFile_nameN = pN['dataFile_name']
path_WFN = pN['path_WF']
path_CatN = pN['path_Cat'] #original, raw catalog
subCatalog_NameN = f"{dataFile_name}_Sgrams_Subcatalog.hdf5"
pathACMN = f'{path_top}{projNameN}/03_output/{station}/SpecUFEx_output/step2_NMF/'
pathSTMN = f'{path_top}{projNameN}/03_output/{station}/SpecUFEx_output/step4_stateTransMats/'
pathEBN = f'{path_top}{projNameN}/02_src/02_SpecUFEx/EB.mat'
pathElnBN = f'{path_top}{projNameN}/02_src/02_SpecUFEx/ElnB.mat'
pathWN = path_projN + '02_src/02_SpecUFEx/out.DictGain.mat'
# pathClusCatN = path_projN + f"principalDf_full_{mode}_Kopt{KoptN}.csv"
dataH5_pathN = path_projN + dataFile_nameN
pathFig = '../reports/figures/'
pathAuxData = '../data/processed/Garcia/'
###Output
_____no_output_____
###Markdown
Load auxiliary data
###Code
## Load auxiliary catalog
gps_station_list = ['24','34','36','37']
gps_df_list = []
for gst in gps_station_list:
gps_df = pd.read_csv(f'{pathAuxData}gps_roll_Slopecorrected_{gst}.csv',index_col=0)
gps_df['datetime'] = [pd.to_datetime(ii) for ii in gps_df.index]
gps_df['datetime_index'] = [pd.to_datetime(ii) for ii in gps_df.index]
gps_df = gps_df.set_index('datetime_index')
gps_df_list.append(gps_df)
lake_df = pd.read_csv(f'{pathAuxData}lake_df.csv',index_col=0)
lake_df['datetime'] = [pd.to_datetime(ii) for ii in lake_df.index]
lake_df['datetime_index'] = [pd.to_datetime(ii) for ii in lake_df.index]
lake_df = lake_df.set_index('datetime_index')
meteor_df = pd.read_csv(f'{pathAuxData}meteor_df.csv',index_col=0)
meteor_df['datetime'] = [pd.to_datetime(ii) for ii in meteor_df.index]
meteor_df['datetime_index'] = [pd.to_datetime(ii) for ii in meteor_df.index]
meteor_df = meteor_df.set_index('datetime_index')
rain_df = meteor_df.rain
len(gps_df)
###Output
_____no_output_____
###Markdown
Define some important times in study period
###Code
# timing of lake events
tstart = dtt.datetime(2007, 6, 13)
tend = dtt.datetime(2007, 7, 23)
calvet = dtt.datetime(2007, 7, 1,13,41,35)
supraDraint = dtt.datetime(2007, 7, 4)
subDraint = dtt.datetime(2007, 7, 7)
drainEndt = dtt.datetime(2007, 7, 15)
###Output
_____no_output_____
###Markdown
Load cluster catalogs
###Code
Kopt = 3
KoptN = 4
cat00 = pd.read_csv(f'../data/interim/icequakes_k{Kopt}.csv')
cat00N = pd.read_csv(f'../data/interim/noise_k{KoptN}.csv')
cat00['event_ID'] = [str(i) for i in cat00.event_ID]
## convert to datetime, set as index
cat00['datetime'] = [pd.to_datetime(i) for i in cat00.datetime]
cat00['datetime_index']= [pd.to_datetime(i) for i in cat00.datetime]
cat00 = cat00.set_index('datetime_index')
## convert to datetime, set as index
cat00N['event_ID'] = [str(i) for i in cat00N.event_ID]
cat00N['datetime'] = [pd.to_datetime(i) for i in cat00N.datetime]
cat00N['datetime_index']= [pd.to_datetime(i) for i in cat00N.datetime]
cat00N = cat00N.set_index('datetime_index')
cat00N.Cluster
###Output
_____no_output_____
###Markdown
Load station data
###Code
##station data
stn = pd.read_csv("../data/raw/stnlst.csv",
header=None,
names=['name','X','Y','Elevation','dX','dY','Depth'])
###Output
_____no_output_____
###Markdown
Get experiment parameters from H5 file
###Code
######### ######### ######### ######### ######### ######### ######### #########
####IQIQIQIQIQIQIQIQI
######### ######### ######### ######### ######### ######### ######### #########
with h5py.File(path_proj + dataFile_name,'r') as dataFile:
lenData = dataFile['processing_info/'].get('lenData')[()]
fs = dataFile['spec_parameters/'].get('fs')[()]
# fmin =
nperseg = dataFile['spec_parameters/'].get('nperseg')[()]
noverlap = dataFile['spec_parameters/'].get('noverlap')[()]
nfft = dataFile['spec_parameters/'].get('nfft')[()]
fmax = dataFile['spec_parameters/'].get('fmax')[()]
fmax = np.ceil(fmax)
fmin = dataFile['spec_parameters/'].get('fmin')[()]
fmin = np.floor(fmin)
fSTFT = dataFile['spec_parameters/'].get('fSTFT')[()]
tSTFT = dataFile['spec_parameters/'].get('tSTFT')[()]
sgram_mode = dataFile['spec_parameters/'].get('mode')[()].decode('utf-8')
scaling = dataFile['spec_parameters/'].get('scaling')[()].decode('utf-8')
fs = int(np.ceil(fs))
winLen_Sec = float(nperseg / fs)
######### ######### ######### ######### ######### ######### ######### #########
##### NOISENOISENOISENOISENOISE
######### ######### ######### ######### ######### ######### ######### #########
with h5py.File(path_projN + dataFile_nameN,'r') as dataFile:
lenDataN = dataFile['processing_info/'].get('lenData')[()]
fsN = dataFile['spec_parameters/'].get('fs')[()]
# fminN =
npersegN = dataFile['spec_parameters/'].get('nperseg')[()]
noverlapN = dataFile['spec_parameters/'].get('noverlap')[()]
nfftN = dataFile['spec_parameters/'].get('nfft')[()]
fmaxN = dataFile['spec_parameters/'].get('fmax')[()]
fmaxN = np.ceil(fmaxN)
fminN = dataFile['spec_parameters/'].get('fmin')[()]
fminN = np.floor(fminN)
fSTFTN = dataFile['spec_parameters/'].get('fSTFT')[()]
tSTFTN = dataFile['spec_parameters/'].get('tSTFT')[()]
sgram_modeN = dataFile['spec_parameters/'].get('mode')[()].decode('utf-8')
scalingN = dataFile['spec_parameters/'].get('scaling')[()].decode('utf-8')
fsN = int(np.ceil(fsN))
winLen_SecN = float(npersegN / fsN)
###Output
_____no_output_____
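###Markdown
The icequake and noise parameter blocks above are nearly identical; a hypothetical helper such as the following sketch (assuming both HDF5 files share the group layout used above) could read either file:
###Code
def read_spec_parameters(h5_path):
    """Read processing/spectrogram parameters from a project HDF5 file."""
    with h5py.File(h5_path, 'r') as f:
        spec = f['spec_parameters/']
        params = {
            'lenData': f['processing_info/'].get('lenData')[()],
            'fs': int(np.ceil(spec.get('fs')[()])),
            'nperseg': spec.get('nperseg')[()],
            'noverlap': spec.get('noverlap')[()],
            'nfft': spec.get('nfft')[()],
            'fmax': np.ceil(spec.get('fmax')[()]),
            'fmin': np.floor(spec.get('fmin')[()]),
            'fSTFT': spec.get('fSTFT')[()],
            'tSTFT': spec.get('tSTFT')[()],
            'sgram_mode': spec.get('mode')[()].decode('utf-8'),
            'scaling': spec.get('scaling')[()].decode('utf-8'),
        }
    params['winLen_Sec'] = float(params['nperseg'] / params['fs'])
    return params
# e.g. params  = read_spec_parameters(path_proj + dataFile_name)
#      paramsN = read_spec_parameters(path_projN + dataFile_nameN)
###Output
_____no_output_____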
###Markdown
Load specufex output
###Code
######### ######### ######### ######### ######### ######### ######### #########
## specufex output - IQIQIQIQIQIQIQIQIQIQ
######### ######### ######### ######### ######### ######### ######### #########
Wmat = loadmat(pathW)
EBmat = loadmat(pathEB)
W = Wmat.get('W1')
EB = EBmat.get('EB')
numPatterns = len(W[1])
Nfreqs = len(W)
numStates = EB.shape[0]
order_swap = getNMFOrder(W,numPatterns)
W_new = resortByNMF(W,order_swap)
EB_new = resortByNMF(EB,order_swap)
RMM = W_new @ EB_new.T
######### ######### ######### ######### ######### ######### ######### #########
## specufex output - NOISENOISENOINSENOISE
######### ######### ######### ######### ######### ######### ######### #########
WmatN = loadmat(pathWN)
EBmatN = loadmat(pathEBN)
WN = WmatN.get('W1')
EBN = EBmatN.get('EB')
numPatternsN = len(WN[1])
NfreqsN = len(WN)
numStatesN = EBN.shape[0]
order_swapN = getNMFOrder(WN,numPatternsN)
W_newN = resortByNMF(WN,order_swapN)
EB_newN = resortByNMF(EBN,order_swapN)
RMMN = W_newN @ EB_newN.T
###Output
_____no_output_____
###Markdown
Format day ticks, time plotting * Central European Time is 2 hours ahead of UTC (Coordinated Universal Time) * Max temp occurs around 16:00 (4 pm) local time, i.e. 14:00 (2 pm) UTC * All times are in UTC. TODO: replace the dummy variable `clus_clu_perday = cat00.event_ID.resample('D', label='left', closed='right').count()`, which is only used to build the complete set of days.
###Code
############################################################
##### FORMAT DAY TICKS (ASSUMES NO DAYS SKIPPED?) ######
############################################################
tstart = pd.to_datetime('2007-06-14 00:00:00')
tend = pd.to_datetime('2007-07-22 00:00:00')
delta_day = 7
##dummy variable -- just needed to get complete day set -- FIXFIX
clus_clu_perday = cat00.event_ID.resample('D', label='left', closed='right').count()
numDays = len(clus_clu_perday)
days_list = [clus_clu_perday.index[i] for i in range(numDays)]
## these have lots of possible text formats
day_labels = [f"{days_list[d].month}-{days_list[d].date().day}" for d in range(0,len(days_list),delta_day)]
day_ticks = [days_list[d] for d in range(0,len(days_list),delta_day)]
# Central European Time is 2 hours later than UTC (Coordinated Universal Time)
## max temp is around 4 pm local time (16:00), which is 14:00 (2 pm) in UTC
## all times in UTC
hour_of_approx_max_temp = 14
# hourMaxTemp = [dtt.datetime(2007, 6, 14,hour_of_approx_max_temp,0,0) + pd.DateOffset(i) for i in range(0,numDays)]
## ts 2021/08/07 : change line to start of day
dayStart = hourMaxTemp = [dtt.datetime(2007, 6, 14) + pd.DateOffset(i) for i in range(0,numDays)]
hour24labels = [str(r) for r in range(0,24)] #UTC
print(day_labels)
############################################################
############################################################
plt.rcParams['image.cmap']='magma'
plt.rcParams.update({'font.size': 8})
colors =cm.Paired(np.array([1,5,7,9,2,4,6,8]))
## when plotting, add a bit of buffer so bars aren't cut off
tlimstart = pd.to_datetime('2007-06-13 12:00:00')
tlimend = pd.to_datetime('2007-07-22 12:00:00')
lw1=4
lw2=5
alphaT=1
ylabfont=8
ylabpad =10
plt_kwargs = {'lw1':lw1,
'lw2':lw2,
'alphaT':alphaT,
'ylabfont':ylabfont,
'ylabpad':ylabpad,
'colors':colors,
'scaling':scaling,
'sgram_mode':sgram_mode,
'hour24labels':hour24labels,
'day_ticks':day_ticks,
'day_labels':day_labels,
'numDays':numDays,
'hourMaxTemp':hourMaxTemp,
'tstart':tlimstart, ## for extending x axis to fit bars
'tend':tlimend, ## for extending x axis to fit bars
'tstartreal':tstart,## actual study bound
'tendreal':tend, ## actual study bound
'supraDraint':supraDraint,
'subDraint':subDraint,
'drainEndt':drainEndt}
###Output
_____no_output_____
###Markdown
Specs for figures JGR
###Code
#quarter page
width1 = 3.74016
height1 = 4.52756
#full page
width2 = 7.48031
height2 = 9.05512
###Output
_____no_output_____
###Markdown
Figure 7 - Map of Icequakes
###Code
topF = 20
catRep = getTopFCat(cat00,topF)
##v3 separate map for each cluster
##settings for yellow bars
plotMap = 0
size1 = 1000
a1 = .7
k=3
plt.rcParams.update({'font.size': 12})
# if 'Event' in key:
fig,axes = plt.subplots(figsize = (width2,height1))#,sharex=True,constrained_layout=True)
gs = gridspec.GridSpec(1,1)
# gs.update(wspace=0.02, hspace=0.07)
ax = plt.subplot(gs[0])
ax.set_aspect('equal')
ax.tick_params(axis='x',labelrotation=45)
# cat00k = cat00[cat00.Cluster==k]
figureFunctions2.plotMap(cat00,
ax=ax,
colorBy='cluster',
size=3,
lw=1,
alpha=.3,
edgecolor='cluster',
**plt_kwargs); #'oneCluster''cluster';'all';'datetime'
figureFunctions2.plotMap(catRep,
ax=ax,
colorBy='cluster',
size=25,
marker='o',
lw=1,
alpha=.6,
edgecolor='None',
**plt_kwargs); #'oneCluster''cluster';'all';'datetime'
###% Stations
figureFunctions2.plotStations(stn,station,ax=ax)
#%% Legend
from matplotlib.lines import Line2D
ms2 = 6
n_list = [len(cat00[cat00.Cluster==k]) for k in range(1,Kopt+1)]
legend_elements = [Line2D([0], [0], marker='o', linestyle='None', color=colors[0], label=f'C1, n={n_list[0]}',markersize=ms2),
Line2D([0], [0], marker='o', linestyle='None', color=colors[1], label=f'C2, n={n_list[1]}',markersize=ms2),
Line2D([0], [0], marker='o', linestyle='None', color=colors[2], label=f'C3, n={n_list[2]}',markersize=ms2)]
# Create the figure
ax.legend(handles=legend_elements)#, loc='center')
#%% limits
buff=5
ax.set_xlim(cat00.X_m.min()-buff,cat00.X_m.max()+buff)
ax.set_ylim(cat00.Y_m.min()-buff,cat00.Y_m.max()+buff)
plt.savefig(pathFig + f'Figure_7.png', dpi=300, bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Figure 8 Get Features for Rep Events
###Code
catRepN = getTopFCat(cat00N,topF)
gF = 1
if gF:
print('getting features for events...')
df = getFeatures(catRep,dataH5_path,station,channel,fmin,fmax,fs,nfft)
print('getting features for noise...')
dfN = getFeatures(catRepN,dataH5_pathN,station,channel,fminN,fmaxN,fsN,nfftN)
# df = getFeatures(catRep,filetype,fmin,fmax,fs,path_WF,nfft,dataH5_path,station,channel)
# print('getting features for noise...')
# dfN = getFeatures(catRepN,filetypeN,fminN,fmaxN,fsN,path_WFN,nfftN,dataH5_pathN,station,channel)
print('done!')
# get location features
print('getting location features for events...')
df_loc = getLocationFeatures(catRep,stn,station)
###Output
getting features for events...
###Markdown
Figure 8 - Boxplots plot feature boxplots
###Code
plt.rcParams.update({'font.size': 8})
fig,axes = plt.subplots(figsize = (width1,height2))#,sharex=True,constrained_layout=True)
gs = gridspec.GridSpec(4,7)
gs.update(wspace=4, hspace=0.1)
FS = 13 #'Cluster' x label
tfont = 14
tpad = 6
title = 'Icequakes'
titleN = 'Noise'
textYN = 8
textY = 7.8
# ### ### ### ### ### ### ### ### ### ### ### ###
# ### ### ### ### ### ### ### ### ### ### ### ###
# ####### LOCATION LOCATION LOCATION
# ### ### ### ### ### ### ### ### ### ### ### ###
# ### ### ### ### ### ### ### ### ### ### ### ###
## plot 3D dist boxplot
ax = plt.subplot(gs[0,0:3])
ax.set_title(title,fontsize=tfont,pad=tpad)
figureFunctions2.plotFeatureBoxPlot(df_loc,Kopt,'DistXYZ_m',ax=ax,**plt_kwargs)
ax.set_ylabel('Station distance (m)',labelpad=10)
ax.set_xlabel('')
ax.set_xticks([])
ax.set_xticklabels('')
## plot full Depth boxplot
ax = plt.subplot(gs[1,0:3])
figureFunctions2.plotFeatureBoxPlot(df_loc,Kopt,'Depth_m',ax=ax,**plt_kwargs)
ax.invert_yaxis()
ax.set_ylabel('')
ax.set_xticks([])
ax.set_xticklabels('')
ax.set_xlabel('')
ax.set_ylabel('Depth (m)',labelpad=10)
### ### ### ### ### ### ### ### ### ### ### ###
### ### ### ### ### ### ### ### ### ### ### ###
## plot boxplot for RSAM
ax = plt.subplot(gs[2,0:3])
figureFunctions2.plotFeatureBoxPlot(df,Kopt,'log10RSAM',ax=ax,**plt_kwargs)
ax.set_ylabel('log10(RSAM) ($m/s^2$)',labelpad=5)
ax.set_xticks([])
ax.set_xticklabels('')
ax.set_xlabel('')
# ax.grid('off')
### ### ### ### ### ### ### ### ### ### ### ###
## plot Boxplot for SC
ax = plt.subplot(gs[3,0:3])
figureFunctions2.plotFeatureBoxPlot(df,Kopt,'SC',ax=ax,**plt_kwargs)
# ax.set_xlabel('Cluster',labelpad=4,fontsize=FS)
ax.set_ylabel('Spectral centroid ($Hz$)',labelpad=5)
# ax.set_xticks([])
# ax.set_xticklabels('')
# ax.set_xlabel('')
# ax.grid('off')
### ### ### ### ### ### ### ### ### ### ### ###
### ### ### ### ### ### ### ### ### ### ### ###
####### NOISENOISENOISE
### ### ### ### ### ### ### ### ### ### ### ###
### ### ### ### ### ### ### ### ### ### ### ###
### ### ### ### ### ### ### ### ### ### ### ###
### ### ### ### ### ### ### ### ### ### ### ###
## plot boxplot for RSAM NOISE
ax = plt.subplot(gs[2,3:])
ax.set_title(titleN,fontsize=tfont,pad=tpad)
figureFunctions2.plotFeatureBoxPlot(dfN,KoptN,'log10RSAM',ax=ax,**plt_kwargs)
# ax.set_ylabel('log10(RSAM)',labelpad=12)
ax.set_ylabel('')
### ### ### ### ### ### ### ### ### ### ### ###
## plot Boxplot for SC NOISE
ax = plt.subplot(gs[3,3:])
figureFunctions2.plotFeatureBoxPlot(dfN,KoptN,'SC',ax=ax,**plt_kwargs)
# ax.set_ylabel('Spectral centroid (Hz)',labelpad=4)
ax.set_ylabel('')
plt.tight_layout()
plt.savefig(pathFig + f'Figure_8.png', dpi=300, bbox_inches='tight')
###Output
/Users/theresasawi/opt/anaconda3/envs/seismo2/lib/python3.7/site-packages/ipykernel_launcher.py:112: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.
###Markdown
Figure 9 - Hourly clusters Stack hourly
###Code
plt.rcParams.update({'font.size': 12})
title = 'Icequakes'
titleN = 'Noise'
dailyTempDiff = getDailyTempDiff(meteor_df,**plt_kwargs)
tfont = 14
tpad = 16
fig,axes = plt.subplots(figsize = (width1,height1))#,sharex=True)#,constrained_layout=True)
plt.suptitle('Icequakes Noise ',fontsize=tfont,y=.94)
gs = gridspec.GridSpec(KoptN+1, 2)
gs.update(wspace=.5, hspace=.8)
### ICEQUAKE PROPORTION HOURLY
ax = plt.subplot(gs[0,0])
figureFunctions2.plotHourBarStack(cat00,Kopt,dailyTempDiff,ax=ax,labelpad=10,label='none',colorBy='cluster',k=k,**plt_kwargs)
ax.set_ylabel('Proportion of \n observations ',labelpad=8)
ax.set_xlabel(' Hour of Day (UTC)',labelpad=8)
ax.set_xlim(-.5,23.5)
ax.set_ylim(0,1)
# ### NOISE PROPORTION HOURLY
axN = plt.subplot(gs[0,1])
figureFunctions2.plotHourBarStack(cat00N,KoptN,dailyTempDiff,ax=axN,labelpad=8,label='right',**plt_kwargs)
axN.set_ylabel('')
axN.set_xlabel('')
axN.set_xlim(-.5,23.5)
axN.set_ylim(0,1)
plt.savefig(pathFig + f'Figure_9_stack.png', dpi=300, bbox_inches='tight')
###Output
../src/visualization/functions2.py:350: FutureWarning: 'loffset' in .resample() and in Grouper() is deprecated.
>>> df.resample(freq="3s", loffset="8H")
becomes:
>>> from pandas.tseries.frequencies import to_offset
>>> df = df.resample(freq="3s").mean()
>>> df.index = df.index.to_timestamp() + to_offset("8H")
temp_H = meteor_df1.temp.resample('H',loffset='30T').mean().ffill()
../src/visualization/functions2.py:351: FutureWarning: 'loffset' in .resample() and in Grouper() is deprecated.
>>> df.resample(freq="3s", loffset="8H")
becomes:
>>> from pandas.tseries.frequencies import to_offset
>>> df = df.resample(freq="3s").mean()
>>> df.index = df.index.to_timestamp() + to_offset("8H")
temp_D = meteor_df1.temp.resample('D',loffset='12H').mean().ffill()
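###Markdown
The FutureWarnings above come from the deprecated `loffset` argument used inside `functions2.py`; the pattern pandas recommends is to resample first and then shift the index. The cell below is a sketch of that replacement (not a tested patch of the plotting module):
###Code
from pandas.tseries.frequencies import to_offset

# hourly mean temperature, centred on the half hour (replaces loffset='30T')
temp_H = meteor_df.temp.resample('H').mean().ffill()
temp_H.index = temp_H.index + to_offset('30T')

# daily mean temperature, centred on midday (replaces loffset='12H')
temp_D = meteor_df.temp.resample('D').mean().ffill()
temp_D.index = temp_D.index + to_offset('12H')
###Output
_____no_output_____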
###Markdown
Figure 10 - Clusters and GPS displacement over season
###Code
!pwd
from matplotlib.ticker import FormatStrFormatter
plt.rcParams.update({'font.size': 10})
fig,axes = plt.subplots(figsize = (width1,height2))#,sharex=True)#,constrained_layout=True)
gs = gridspec.GridSpec(KoptN+Kopt+1,1)
gs.update(wspace=.6, hspace=.15)
tpad = 6
gpsstations = [24,34,36,37]
timecode = '3H'
datatype = ''
title = 'Icequakes'
textY = 16
ymax = 8.7
texty = 15
for k in range(1,Kopt+1):
ax=plt.subplot(gs[k-1,0])
figureFunctions2.plotBarCluster(cat00,k=k,barWidth=.3,timeBin='H',ax=ax,**plt_kwargs)
if k == 1:
figureFunctions2.plotLake(lake_df,rain_df,legend=None,ylabel='right',ax=ax,**plt_kwargs)
##title
ax.text(x=datetime.datetime(2007,6,14),y= ymax+.2, s='Icequakes',color='k',size=texty)
ax.text(x=supraDraint,y= ymax, s='1.',color='fuchsia',size=texty)
ax.text(x=subDraint,y= ymax,s='2.',color='fuchsia',size=texty)
ax.text(x=drainEndt,y= ymax,s='3.',color='fuchsia',size=texty)
for gps_dff in gps_df_list:
figureFunctions2.plotGPS(gps_dff.gps_roll,ylabel='none',size=.1,ax=ax,**plt_kwargs)
elif k==2:
ax.set_ylabel('Observations per hour',labelpad=6)
figureFunctions2.plotLake(lake_df,rain_df,legend=None,ylabel='none',ax=ax,**plt_kwargs)
for gps_dff in gps_df_list:
figureFunctions2.plotGPS(gps_dff.gps_roll,ylabel='none',size=.1,ax=ax,**plt_kwargs)
elif k==3:
for e, gps_dff in enumerate(gps_df_list):
if e==0:
figureFunctions2.plotGPS(gps_dff.gps_roll,ylabel='right',size=.1,ax=ax,**plt_kwargs)
else:
figureFunctions2.plotGPS(gps_dff.gps_roll,ylabel='none',size=.1,ax=ax,**plt_kwargs)
figureFunctions2.plotLake(lake_df,rain_df,legend=None,ylabel='none',ax=ax,**plt_kwargs)
else:
figureFunctions2.plotLake(lake_df,rain_df,legend=None,ylabel='none',ax=ax,**plt_kwargs)
for gps_dff in gps_df_list:
figureFunctions2.plotGPS(gps_dff.gps_roll,ylabel='none',size=.1,ax=ax,**plt_kwargs)
        ## the plotTemp call was repeated 3 times so the overlapping alpha darkens the line (currently disabled)
# figureFunctions2.plotTemp(meteor_df.temp,ax=ax,labels='off',**plt_kwargs)
# figureFunctions2.plotTemp(meteor_df.temp,ax=ax,labels='off',**plt_kwargs)
# figureFunctions2.plotTemp(meteor_df.temp,ax=ax,labels='off',**plt_kwargs)
ax.axvline(x=subDraint,color='fuchsia',linestyle='--',linewidth=2, alpha=1)
ax.axvline(x=supraDraint,color='fuchsia',linestyle='--',linewidth=2, alpha=1)
ax.axvline(x=drainEndt,color='fuchsia',ls='--',linewidth=2)
if k == Kopt:
ax.set_xlabel('Date, 2007 (month-day)')
ax.tick_params(axis='x',labelrotation=0)
else:
ax.set_xlabel('')
ax.set_xticklabels('')
ax=plt.subplot(gs[Kopt,0])
ax.axis('off')
titleN = 'Noise'
textYN = 17
# ymaxN = 18
ymaxN = 13
for k in range(1,KoptN+1):
ax=plt.subplot(gs[k-1+Kopt+1,0])
# ax.set_ylim(ymin=0)
figureFunctions2.plotBarCluster(cat00N,k=k,barWidth=.3,ax=ax,**plt_kwargs)
    ## the plotTemp call was repeated 3 times so the overlapping alpha darkens the line (currently disabled)
# figureFunctions2.plotTemp(meteor_df.temp,ax=ax,labels='off',**plt_kwargs)
# figureFunctions2.plotTemp(meteor_df.temp,ax=ax,labels='off',**plt_kwargs)
# figureFunctions2.plotTemp(meteor_df.temp,ax=ax,labels='off',**plt_kwargs)
ax.axvline(x=subDraint,color='fuchsia',linestyle='--',linewidth=2, alpha=1)
ax.axvline(x=supraDraint,color='fuchsia',linestyle='--',linewidth=2, alpha=1)
ax.axvline(x=drainEndt,color='fuchsia',ls='--',linewidth=2)
if k == 1:
ax.set_xlabel('')
ax.set_xticklabels('')
# ax.set_title(titleN,fontsize=tfont,pad=tpad)
figureFunctions2.plotLake(lake_df,rain_df,legend=None,ylabel='right',ax=ax,**plt_kwargs)
###TITLE
ax.text(x=datetime.datetime(2007,6,14),y= ymaxN, s='Noise',color='k',size=texty)
ax.text(x=supraDraint,y= ymaxN, s='1.',color='fuchsia',size=texty)
ax.text(x=subDraint, y= ymaxN,s='2.',color='fuchsia',size=texty)
ax.text(x=drainEndt, y= ymaxN,s='3.',color='fuchsia',size=texty)
if k == 3:
ax.set_ylabel('Observations per hour',labelpad=6)
figureFunctions2.plotLake(lake_df,rain_df,legend=None,ylabel=None,ax=ax,**plt_kwargs)
for e, gps_dff in enumerate(gps_df_list):
if e==0:
figureFunctions2.plotGPS(gps_dff.gps_roll,ylabel='right',size=.1,ax=ax,**plt_kwargs)
else:
figureFunctions2.plotGPS(gps_dff.gps_roll,ylabel='none',size=.1,ax=ax,**plt_kwargs)
else:
figureFunctions2.plotLake(lake_df,rain_df,legend=None,ylabel=None,ax=ax,**plt_kwargs)
for gps_dff in gps_df_list:
figureFunctions2.plotGPS(gps_dff.gps_roll,ylabel='none',size=.1,ax=ax,**plt_kwargs)
if k == KoptN:
ax.set_xlabel('Date, 2007 (month-day)')
ax.tick_params(axis='x',labelrotation=0)
else:
ax.set_xlabel('')
ax.set_xticklabels('')
ax.set_ylim(bottom=.5)
plt.savefig(pathFig + f'clusterBarplot.png', dpi=300, bbox_inches='tight')
###Output
_____no_output_____
###Markdown
PLaying with heat map -- need to interpolate time series Pad earlier and later dates as list
###Code
i=1
timeBin='3H'
barWidth=.1
barHeightsPadList = []
for k in range(1,Kopt+1):
clus_events = cat00[cat00.Cluster == k]
barHeights = clus_events.resample(timeBin).event_ID.count()
startFill = pd.date_range(start=cat00.index[0], end=barHeights.index[0], freq='3H')
endFill = pd.date_range(start=barHeights.index[-1], end=cat00.index[-1], freq='3H')
startFillZ = np.zeros(len(startFill),dtype='int64')
endFillZ = np.zeros(len(endFill),dtype='int64')
barHeightsPad = np.hstack([startFillZ,barHeights])
barHeightsPad = np.hstack([barHeightsPad,endFillZ])
barHeightsPadList.append(barHeightsPad)
barHeights_ar = np.array(barHeightsPadList)
###Output
_____no_output_____
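###Markdown
An alternative to the manual zero-padding above is to reindex each cluster's 3-hour counts onto one complete time grid (a sketch intended to be equivalent to the loop above; `full_grid` is a hypothetical name):
###Code
full_grid = pd.date_range(start=cat00.index.min().floor('3H'),
                          end=cat00.index.max().ceil('3H'),
                          freq='3H')
barHeights_ar = np.vstack([
    cat00[cat00.Cluster == k]
        .resample(timeBin).event_ID.count()
        .reindex(full_grid, fill_value=0)
        .values
    for k in range(1, Kopt + 1)
])
###Output
_____no_output_____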
###Markdown
Plot heat map
###Code
plt.figure(figsize=(width2,height1))
ax = plt.gca()
sns.heatmap(barHeights_ar,
cmap=cm.Greys,
vmin = 0, #Values to anchor the colormap, otherwise they are inferred from the data and other keyword arguments.
vmax = np.max(barHeightsPadList),
square=False,
ax=ax)
ax.set_ylabel('Observations \n per hour',labelpad=4)
# ax.set_ylim(0,40)
# ax.set_xlim(clus_events_perday.index.min(),clus_events_perday.index.max())
# ax.set_xticks([])
ax.set_yticks([0.5,1.5,2.5])
ax.set_yticklabels(['1','2','3'])
# ax.set_xticklabels(day_labels)
ax.set_ylabel('Cluster')
ax.set_xlabel('time')
## the heatmap x-axis is in 3-hour-bin indices rather than datetimes, so the drainage
## times are converted to approximate bin offsets from the start of the catalog
supra_idx = (pd.Timestamp(supraDraint) - cat00.index[0]) / pd.Timedelta(timeBin)
drainEnd_idx = (pd.Timestamp(drainEndt) - cat00.index[0]) / pd.Timedelta(timeBin)
ax.axvline(x=0, c='k', linestyle='--', linewidth=2, alpha=1)
ax.axvline(x=supra_idx, c='k', linestyle='--', linewidth=2, alpha=1)
ax.axvline(x=drainEnd_idx, color='k', ls='--', linewidth=2)
# ax.set_xlim(tstart,tend)
###Output
_____no_output_____
###Markdown
Same for Noise
###Code
i=1
timeBin='3H'
barWidth=.1
barHeightsPadListN = []
for k in range(1,KoptN+1):
clus_events = cat00N[cat00N.Cluster == k]
barHeightsN = clus_events.resample(timeBin).event_ID.count()
startFill = pd.date_range(start=cat00N.index[0], end=barHeightsN.index[0], freq='3H')
endFill = pd.date_range(start=barHeightsN.index[-1], end=cat00N.index[-1], freq='3H')
startFillZ = np.zeros(len(startFill),dtype='int64')
endFillZ = np.zeros(len(endFill),dtype='int64')
barHeightsPadN = np.hstack([startFillZ,barHeightsN])
barHeightsPadN = np.hstack([barHeightsPadN,endFillZ])
barHeightsPadListN.append(barHeightsPadN)
barHeights_arN = np.array(barHeightsPadListN)
ax.get_xticks()
plt.figure(figsize=(width2,height1))
ax = plt.gca()
p1 = sns.heatmap(barHeights_arN,
cmap=cm.Greys,
vmin = 0, #Values to anchor the colormap, otherwise they are inferred from the data and other keyword arguments.
vmax = np.max(barHeightsPadListN),
square=False,
ax=ax)
p1.set_xticklabels(ax.get_xticklabels(), rotation=45)
ax.set_ylabel('Observations \n per hour',labelpad=4)
# ax.set_ylim(0,40)
# ax.set_xlim(clus_events_perday.index.min(),clus_events_perday.index.max())
# ax.set_xticks([])
ax.set_yticks([0.5,1.5,2.5,3.5])
ax.set_yticklabels(['1','2','3','4'])
# ax.set_xticklabels(day_labels)
ax.set_ylabel('Cluster')
ax.set_xlabel('time')
# ax.set_xlim(tstart,tend)
###Output
_____no_output_____ |
Exercicios8.ipynb | ###Markdown
Exercises 8 Considering the dictionary with the students' names and their respective grades below, create a loop to iterate over each element of the dictionary and write each student to a new text file - Each student must occupy one line of the new text file - The format must be: name,grade (Pedro,8.0) - After creating the text file, read the file and display all the students
###Code
alunos = {'Pedro': 8.0, 'Maria': 10.0, 'Amilton': 7.5}
alunos.items()
with open('texto.txt', 'w') as texto:
for nome, nota in alunos.items():
texto.write(f'{nome}, {nota}\n')
with open('texto.txt', 'r') as tex:
for linha in tex:
print(linha)
###Output
Pedro, 8.0
Maria, 10.0
Amilton, 7.5
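###Markdown
An equivalent solution using the `csv` module (a sketch; it writes the same name,grade pairs, without the extra space after the comma):
###Code
import csv

with open('texto_csv.txt', 'w', newline='') as arquivo:
    escritor = csv.writer(arquivo)
    for nome, nota in alunos.items():
        escritor.writerow([nome, nota])

with open('texto_csv.txt', 'r') as arquivo:
    for linha in arquivo:
        print(linha.strip())
###Output
_____no_output_____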
|
notebooks/deploy_model.ipynb | ###Markdown
Deploy
###Code
import os

from azureml.core.webservice import AciWebservice
from azureml.core.model import InferenceConfig
from azureml.core import Environment, Model

# `ws` (the Workspace), `model` (the registered Model) and `experiment_folder` are
# assumed to have been defined in earlier cells of the workflow.
env = Environment.from_pip_requirements('image_resto_env', os.path.join(experiment_folder, 'requirements.txt'))
# Set path for scoring script
script_file = os.path.join(experiment_folder, "src", "deploy", "score.py")
# Configure the scoring environment
inference_config = InferenceConfig(entry_script=script_file,
environment=env)
deployment_config = AciWebservice.deploy_configuration(cpu_cores = 1, memory_gb = 1)
service_name = "image-reconstruction-service"
if service_name in ws.webservices:
ws.webservices[service_name].delete()
service = Model.deploy(ws, service_name, [model], inference_config, deployment_config)
service.wait_for_deployment(True)
print(service.state)
print(service.get_logs())
import json
import torch
import matplotlib.pyplot as plt
x_new = torch.rand(1, 224, 224)*255
# Convert the array to a serializable list in a JSON document
input_json = json.dumps({"data": x_new.tolist()})
# Call the web service, passing the input data (the web service will also accept the data in binary format)
reconstruction = service.run(input_data = input_json)
# Get the predicted class - it'll be the first (and only) one.
reconstruction = json.loads(reconstruction)
%matplotlib inline
plt.imshow(reconstruction*255)
import numpy as np
def get_rbg_from_lab(gray_imgs, ab_imgs, n = 10):
# create an empty array to store images
imgs = np.zeros((n, 224, 224, 3))
imgs[:, :, :, 0] = gray_imgs[0:n:]
imgs[:, :, :, 1:] = ab_imgs[0:n:]
# convert all the images to type unit8
imgs = imgs.astype("uint8")
# create a new empty array
imgs_ = []
for i in range(0, n):
imgs_.append(cv2.cvtColor(imgs[i], cv2.COLOR_LAB2RGB))
# convert the image matrix into a numpy array
imgs_ = np.array(imgs_)
return imgs_
import cv2
img = get_rbg_from_lab((x_new).view(1, 224, 224),
(torch.tensor(reconstruction[0])*255).view(1, 224, 224, 2),
n=1)
plt.imshow(img.squeeze())
###Output
_____no_output_____
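###Markdown
The deployed ACI endpoint can also be called over plain HTTP and deleted once testing is finished (a sketch; `service` and `input_json` are the objects created above):
###Code
import requests

# REST endpoint exposed by the ACI deployment
print(service.scoring_uri)

headers = {'Content-Type': 'application/json'}
resp = requests.post(service.scoring_uri, data=input_json, headers=headers)
print(resp.status_code)
print(resp.json())

# remove the webservice when it is no longer needed to avoid idle costs
# service.delete()
###Output
_____no_output_____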
###Markdown
Deploy Detection Model
This notebook provides a basic introduction to deploying a trained model as either an ACI or AKS webservice with AML leveraging the azure_utils and tfod_utils packages in this repo.
Before executing the code please ensure you have a completed experiement with a trained model using either the scripts in src or the train model notebooks.
Note that this notebook makes use of additional files in this repo:
- utils - contains mapping functions for the classes that are needed for the deployment image
- conda_env.yml - contains the environment and packages required
- score.py - the base scoring file that is used to create the model_score.py used in the deployment
###Code
import os
import sys
from azure_utils.azure import load_config
from azure_utils.deployment import AMLDeploy
###Output
_____no_output_____
###Markdown
1. Define Run Paramters
###Code
# Run params
ENV_CONFIG_FILE = "dev_config.json"
EXPERIMENT = "pothole"
RUN_ID = "pothole_1629819580_7ce6a2e8"
IMAGE_TYPE = "testpotholeservice"
COMPUTE_TARGET_NAME = "testdeployment"
MODEL_NAME = "testpotholeservice"
WEBSERVICE_NAME = MODEL_NAME.lower().replace("_", '')
###Output
_____no_output_____
###Markdown
2. Initialise Deployment Class
###Code
deployment = AMLDeploy(RUN_ID,
EXPERIMENT,
WEBSERVICE_NAME,
MODEL_NAME,
IMAGE_TYPE,
config_file=ENV_CONFIG_FILE)
###Output
_____no_output_____
###Markdown
3. Register Model from Experiment
###Code
model = deployment.register_run_model()
###Output
_____no_output_____
###Markdown
4. Set Scoring Script
The base score file is available in the src dir; variations can be created as needed. At deployment, the model name will be updated to create the final deploy script.
We also set the src dir to the deployment src folder so that at deployment we can access the utils
###Code
src_dir = os.path.join('..', 'src', 'deployment')
score_file = os.path.join(src_dir, 'score_tf2.py')
env_file = './conda_env_tf2.yml'
###Output
_____no_output_____
###Markdown
5. Create Inference Config
###Code
inference_config = deployment.create_inference_config(score_file, src_dir, env_file)
###Output
_____no_output_____
###Markdown
6. Check is a webservice exists with same name
Checks if there is a webservice with the same name. If it returns true you can either skip the next two cells and update that service or change the service name to deploy a new webservice.
###Code
deployment.webservice_exists(deployment.webservice_name)
###Output
_____no_output_____
###Markdown
6. Deploy ACI
Deploy the model to an ACI endpoint; this targets a CPU instance (not a GPU) and is used just for testing purposes.
###Code
target, config = deployment.create_aci()
deployment.deploy_new_webservice(model,
inference_config,
config,
target)
###Output
_____no_output_____
###Markdown
7. Deploy AKS
###Code
deployment.webservice_name = deployment.webservice_name + "-aks"
target, config = deployment.create_aks(COMPUTE_TARGET_NAME, exists=False)
deployment.deploy_new_webservice(model,
inference_config,
config,
target)
###Output
_____no_output_____
###Markdown
8. Update Existing Webservice
###Code
deployment.update_existing_webservice(model, inference_config)
###Output
_____no_output_____
###Markdown
Deploying the Flight Delay ModelIn this notebook, we deploy the model we trained to predict flight delays, using [Kubeflow Serving](https://www.kubeflow.org/docs/components/serving/kfserving/).**Note** this notebook requires access to a KFServing installation. See the [KFServing instructions](../kfserving.md) for details. If running the pipeline on the Kubeflow Pipelines runtime, also see the [readme instructions](../README.md) for the link to install KFP. Import required modulesImport and configure the required modules.
###Code
! pip install -q kfserving
import os
import numpy as np
import requests
# minio is part of kfserving
from minio import Minio
from minio.error import NoSuchBucket
###Output
_____no_output_____
###Markdown
Upload the model to object storage Our notebook has access to the trained model file, which was exported by the previous pipeline phase. _However_, when using a Kubeflow Pipelines runtime, it is not possible to programmatically access the object storage bucket. It also makes execution mechanics different between local and KFP execution modes. So, here we will use a dedicated bucket for models in object storage, and upload the model from the notebook execution environment. We will then deploy the KFServing inference service using that object storage location.
###Code
# set up the minio client to access object storage buckets
os_url = os.environ.get('OS_URL', 'minio-service:9000')
access_key = os.environ.get('ACCESS_KEY_ID', 'minio')
secret_key = os.environ.get('SECRET_ACCESS_KEY', 'minio123')
mc = Minio(os_url,
access_key=access_key,
secret_key=secret_key,
secure=False)
print('Current buckets:')
for b in mc.list_buckets():
print(' ' + b.name)
# create a bucket to upload the model file to
# Note: if the model file already exists we delete it
model_bucket = os.environ.get('MODEL_BUCKET', 'models')
model_dir = os.environ.get('MODEL_DIR', 'models')
model_file = 'model.joblib'
model_path = '{}/{}'.format(model_dir, model_file)
try:
# delete model file if if exists
mc.remove_object(model_bucket, model_file)
except NoSuchBucket:
# the bucket doesn't exist - create it
print('Creating bucket [{}]'.format(model_bucket))
mc.make_bucket(model_bucket)
# upload the model file
file_stat = os.stat(model_path)
with open(model_path, 'rb') as data:
mc.put_object(model_bucket, model_file, data, file_stat.st_size)
# check whether the model file is there
for o in mc.list_objects(model_bucket, prefix=model_file):
print(o)
###Output
_____no_output_____
###Markdown
Create the inference serviceNext, we use the KFServing Python client to create the inference service.**Note** the prerequisites (see the [KF Serving instructions](../kfserving.md)):1. A service account and related secret for the object storage service1. Specify the custom `sklearnserver` Docker image1. Patch the KFP `pipeline-runner` service account role to allow creating a KFServing `inferenceservice`
###Code
from kubernetes import client
from kfserving import KFServingClient
from kfserving import constants
from kfserving import utils
from kfserving import V1alpha2EndpointSpec
from kfserving import V1alpha2PredictorSpec
from kfserving import V1alpha2SKLearnSpec
from kfserving import V1alpha2InferenceServiceSpec
from kfserving import V1alpha2InferenceService
from kubernetes.client import V1ResourceRequirements
KFServing = KFServingClient()
# we need to use the 'kubeflow' namespace so that the KFP runner can create the inference service
namespace = 'kubeflow'
# this is the service account created for S3 access credentials
service_acc = 'kfserving-sa'
model_storage_uri = 's3://{}'.format(model_bucket)
model_name = 'flight-model'
api_version = constants.KFSERVING_GROUP + '/' + constants.KFSERVING_VERSION
default_endpoint_spec = V1alpha2EndpointSpec(
predictor=V1alpha2PredictorSpec(
sklearn=V1alpha2SKLearnSpec(
storage_uri=model_storage_uri,
resources=V1ResourceRequirements(
requests={'cpu':'100m','memory':'1Gi'},
limits={'cpu':'100m', 'memory':'1Gi'}
)
),
service_account_name=service_acc
)
)
isvc = V1alpha2InferenceService(api_version=api_version,
kind=constants.KFSERVING_KIND,
metadata=client.V1ObjectMeta(
name=model_name, namespace=namespace),
spec=V1alpha2InferenceServiceSpec(default=default_endpoint_spec))
KFServing.create(isvc)
# Wait for the inference service to be ready
KFServing.get(model_name, namespace=namespace, watch=True, timeout_seconds=120)
###Output
_____no_output_____
###Markdown
Test the inference serviceOnce the inference service is running and available, we can send some test data to the service.**Note** that when deployed into KFP, we need to use the cluster-local url for the model. When executing locally, we assume that port-forwarding is enabled to allow access to the ingress gateway.
###Code
service = KFServing.get(model_name, namespace=namespace)
# load the 10 example rows from our test data, and display a few rows
examples = np.load('data/test_rows.npy')
examples[:3]
model_mode = os.environ.get('MODEL_MODE', 'local')
model_data = {"instances": examples.tolist()}
if model_mode == 'local':
# executing locally, use the ingress gateway (we assume port-forwarding)
url = f'http://localhost:8080/v1/models/{model_name}:predict'
service_hostname = '{}.{}.example.com'.format(model_name, namespace)
headers = {'Host': service_hostname}
resp = requests.post(url=url, json=model_data, headers=headers)
else:
# we are executing in KFP, use the cluster-local address
url = service['status']['address']['url']
resp = requests.post(url=url, json=model_data)
resp.json()
###Output
_____no_output_____
###Markdown
Delete the model serviceOnce we are done, we clean up the service.
###Code
KFServing.delete(model_name, namespace=namespace)
###Output
_____no_output_____
###Markdown
Deploy Model to Run on Region of Interest Note: Requires Descartes Labs access
###Code
%load_ext autoreload
%autoreload 2
import os
import sys
import descarteslabs as dl
import geopandas as gpd
from tensorflow.keras.models import load_model
from tensorflow import keras
from tqdm.notebook import tqdm
parent_dir = os.path.split(os.getcwd())[0]
if parent_dir not in sys.path:
sys.path.insert(0, parent_dir)
from scripts import deploy_nn_v1
# User inputs
roi = 'test_region'
roi_file = f'../data/boundaries/{roi}.geojson'
patch_model_name = '44px_v2.8_2021-11-11'
patch_model_version = '44px_v2.8'
patch_model_file = '../models/' + patch_model_name + '.h5'
patch_model = load_model(patch_model_file, custom_objects={'LeakyReLU': keras.layers.LeakyReLU,
'ELU': keras.layers.ELU,
'ReLU': keras.layers.ReLU})
patch_stride = 14
patch_input_shape = patch_model.input_shape[1]
# Note on dates: The date range should be longer than the spectrogram length.
# Starting on successive mosaic periods (typically: monthly), as many
# spectrograms are created as fit in the date range.
start_date = '2020-01-01'
end_date = '2021-02-01'
mosaic_period = 4
mosaic_method = 'median'
patch_product_id = f'earthrise:mining_{roi}_v{patch_model_version}_{start_date}_{end_date}_period_{mosaic_period}_method_{mosaic_method}'
product_name = patch_product_id.split(':')[-1] # Arbitrary string - optionally set this to something more human readable.
run_local = False # If False, the model prediction tasks are async queued and sent to DL for processing.
# If running locally, get results faster by setting smalle tilesize (100?)
# If running on Descartes, use tilesize 900
if run_local:
tilesize = 100
else:
tilesize = 900
padding = patch_input_shape - patch_stride
args = [
'--roi_file',
roi_file,
'--patch_product_id',
patch_product_id,
'--product_name',
product_name,
'--patch_model_name',
patch_model_name,
'--patch_model_file',
patch_model_file,
'--patch_stride',
str(patch_stride),
'--mosaic_period',
str(mosaic_period),
'--mosaic_method',
mosaic_method,
'--start_date',
start_date,
'--end_date',
end_date,
'--pad',
str(padding),
'--tilesize',
str((tilesize // patch_input_shape) * patch_input_shape - padding)
]
if run_local:
args.append('--run_local')
###Output
_____no_output_____
###Markdown
Launch Descartes job. Monitor at https://monitor.descarteslabs.com/
###Code
# Because of the way DL uploads modules when queuing async tasks, we need to launch from the scripts/ folder
%cd ../scripts
%pwd
# Check if patch feature collection exists. If it does, delete the FC
fc_ids = [fc.id for fc in dl.vectors.FeatureCollection.list() if patch_product_id in fc.id]
if len(fc_ids) > 0:
fc_id = fc_ids[0]
print("Existing product found.\nDeleting", fc_id)
dl.vectors.FeatureCollection(fc_id).delete()
else:
print("No existing product found.\nCreating", patch_product_id)
deploy_nn_v1.main(args)
###Output
Split ROI into 25 tiles
Model 44px_v2.8_2021-11-11 found in DLStorage.
Creating product earthrise:mining_test_region_v44px_v2.8_2020-01-01_2021-02-01_period_4_method_median_patches
###Markdown
Download Data Download Patch Classifier Feature Collection
###Code
print("Downloading", patch_product_id)
fc_id = [fc.id for fc in dl.vectors.FeatureCollection.list() if patch_product_id in fc.id][0]
fc = dl.vectors.FeatureCollection(fc_id)
region = gpd.read_file(roi_file)['geometry']
features = []
for elem in tqdm(fc.filter(region).features()):
features.append(elem.geojson)
results = gpd.GeoDataFrame.from_features(features)
if len(results) == 0:
print("No results found for", product_name)
else:
basepath = os.path.join('../data/outputs/', patch_model_version)
print("Saving to", basepath)
if not os.path.exists(basepath):
os.makedirs(basepath)
results.to_file(f"{basepath}/{product_name}.geojson", driver='GeoJSON')
print(len(features), 'features found')
###Output
Downloading earthrise:mining_test_region_v44px_v2.8_2020-01-01_2021-02-01_period_4_method_median
###Markdown
Batched Run Deploy model on a folder of boundary files rather than a single ROI Define parameters that are consistent across regions
###Code
patch_model_name = '44px_v2.8_2021-11-11'
patch_model_version = '44px_v2.8'
patch_model_file = '../models/' + patch_model_name + '.h5'
patch_model = load_model(patch_model_file, custom_objects={'LeakyReLU': keras.layers.LeakyReLU,
'ELU': keras.layers.ELU,
'ReLU': keras.layers.ReLU})
patch_stride = 14
patch_input_shape = patch_model.input_shape[1]
# Note on dates: The date range should be longer than the spectrogram length.
# Starting on successive mosaic periods (typically: monthly), as many
# spectrograms are created as fit in the date range.
start_date = '2020-01-01'
end_date = '2021-02-01'
mosaic_period = 4
mosaic_method = 'median'
run_local = False # If False, the model prediction tasks are async queued and sent to DL for processing.
###Output
_____no_output_____
###Markdown
Load folder of boundary files
###Code
boundary_folder = '../data/boundaries/amazon_basin'
region_list = [f.split('.')[0] for f in os.listdir(boundary_folder)]
region_list
###Output
_____no_output_____
###Markdown
Deploy model on region This process will take some time to complete if the regions of interest are large
###Code
for roi in sorted(region_list):
roi_file = os.path.join(boundary_folder, roi + '.geojson')
patch_product_id = f'earthrise:mining_{roi}_v{patch_model_version}_{start_date}_{end_date}_period_{mosaic_period}_method_{mosaic_method}'
product_name = patch_product_id.split(':')[-1] # Arbitrary string - optionally set this to something more human readable.
tilesize = 900
# Generally, leave padding at 0
padding = patch_input_shape - patch_stride
args = [
'--roi_file',
roi_file,
'--patch_product_id',
patch_product_id,
'--product_name',
product_name,
'--patch_model_name',
patch_model_name,
'--patch_model_file',
patch_model_file,
'--patch_stride',
str(patch_stride),
'--mosaic_period',
str(mosaic_period),
'--mosaic_method',
mosaic_method,
'--start_date',
start_date,
'--end_date',
end_date,
'--pad',
str(padding),
'--tilesize',
str((tilesize // patch_input_shape) * patch_input_shape - padding)
]
# Because of the way DL uploads modules when queuing async tasks, we need to launch from the scripts/ folder
%cd ../scripts
%pwd
# Check if patch feature collection exists. If it does, delete the FC
fc_ids = [fc.id for fc in dl.vectors.FeatureCollection.list() if patch_product_id in fc.id]
if len(fc_ids) > 0:
fc_id = fc_ids[0]
print("Existing product found.\nDeleting", fc_id)
dl.vectors.FeatureCollection(fc_id).delete()
print("Deploying", roi)
deploy_nn_v1.main(args)
###Output
_____no_output_____
###Markdown
Bulk Download Download outputs after the model runs have completed. Note that the runs must be complete, as seen on [monitor.descarteslabs.com](monitor.descarteslabs.com), not just deployed, as seen in the previous cell.
###Code
# Patch classifier product download
for roi in sorted(region_list):
roi_file = f'../data/boundaries/amazon_basin/{roi}.geojson'
patch_product_id = f'earthrise:mining_{roi}_v{patch_model_version}_{start_date}_{end_date}_period_{mosaic_period}_method_{mosaic_method}'
product_name = patch_product_id.split(':')[-1]
print("Downloading", patch_product_id)
fc_id = [fc.id for fc in dl.vectors.FeatureCollection.list() if patch_product_id in fc.id][0]
fc = dl.vectors.FeatureCollection(fc_id)
region = gpd.read_file(roi_file)['geometry']
features = []
for elem in tqdm(fc.filter(region).features()):
features.append(elem.geojson)
results = gpd.GeoDataFrame.from_features(features)
if len(results) == 0:
print("No results found for", product_name)
else:
basepath = os.path.join('../data/outputs/', patch_model_version)
print("Saving to", basepath)
if not os.path.exists(basepath):
os.makedirs(basepath)
results.to_file(f"{basepath}/{product_name}.geojson", driver='GeoJSON')
print(len(features), 'features found')
###Output
_____no_output_____ |
Task 7_Numerical_and_Textual_Analysis_of_Stock_Market_Prices_.ipynb | ###Markdown
Stock Market Prediction using Numerical and Textual Analysis ● Objective: Create a hybrid model for stock price/performance prediction using numerical analysis of historical stock prices and sentiment analysis of news headlines ● Stock to analyze and predict - SENSEX (S&P BSE SENSEX) ● Download historical stock prices from finance.yahoo.com **I have used an Auto-ARIMA model to make stock market price predictions from the historical stock price data. In the sentiment analysis model, I have made use of different machine learning algorithms (Random Forest Regressor, LightGBM, AdaBoost and XGBoost) to make the predictions.**
###Code
from google.colab import files
uploaded = files.upload()
from google.colab import files
uploaded = files.upload()
import os
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import itertools
from statsmodels.tsa.stattools import adfuller, acf, pacf
from statsmodels.tsa.arima_model import ARIMA
import nltk
import re
from nltk.corpus import stopwords
nltk.download('stopwords')
nltk.download('vader_lexicon')
from textblob import TextBlob
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from nltk.stem.porter import PorterStemmer
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor, AdaBoostRegressor
import xgboost
import lightgbm
!pip install pmdarima
###Output
Collecting pmdarima
[?25l Downloading https://files.pythonhosted.org/packages/be/62/725b3b6ae0e56c77534de5a8139322e7b863ca53fd5bd6bd3b7de87d0c20/pmdarima-1.7.1-cp36-cp36m-manylinux1_x86_64.whl (1.5MB)
[K |████████████████████████████████| 1.5MB 2.4MB/s
[?25hRequirement already satisfied: urllib3 in /usr/local/lib/python3.6/dist-packages (from pmdarima) (1.24.3)
Requirement already satisfied: scipy>=1.3.2 in /usr/local/lib/python3.6/dist-packages (from pmdarima) (1.4.1)
Collecting statsmodels<0.12,>=0.11
[?25l Downloading https://files.pythonhosted.org/packages/cb/83/540fd83238a18abe6c2d280fa8e489ac5fcefa1f370f0ca1acd16ae1b860/statsmodels-0.11.1-cp36-cp36m-manylinux1_x86_64.whl (8.7MB)
[K |████████████████████████████████| 8.7MB 6.9MB/s
[?25hRequirement already satisfied: pandas>=0.19 in /usr/local/lib/python3.6/dist-packages (from pmdarima) (1.1.2)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from pmdarima) (0.16.0)
Requirement already satisfied: scikit-learn>=0.22 in /usr/local/lib/python3.6/dist-packages (from pmdarima) (0.22.2.post1)
Collecting setuptools<50.0.0
[?25l Downloading https://files.pythonhosted.org/packages/c3/a9/5dc32465951cf4812e9e93b4ad2d314893c2fa6d5f66ce5c057af6e76d85/setuptools-49.6.0-py3-none-any.whl (803kB)
[K |████████████████████████████████| 808kB 47.9MB/s
[?25hRequirement already satisfied: numpy>=1.17.3 in /usr/local/lib/python3.6/dist-packages (from pmdarima) (1.18.5)
Collecting Cython<0.29.18,>=0.29
[?25l Downloading https://files.pythonhosted.org/packages/e7/d7/510ddef0248f3e1e91f9cc7e31c0f35f8954d0af92c5c3fd4c853e859ebe/Cython-0.29.17-cp36-cp36m-manylinux1_x86_64.whl (2.1MB)
[K |████████████████████████████████| 2.1MB 40.2MB/s
[?25hRequirement already satisfied: patsy>=0.5 in /usr/local/lib/python3.6/dist-packages (from statsmodels<0.12,>=0.11->pmdarima) (0.5.1)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.19->pmdarima) (2.8.1)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.19->pmdarima) (2018.9)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from patsy>=0.5->statsmodels<0.12,>=0.11->pmdarima) (1.15.0)
[31mERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.[0m
Installing collected packages: statsmodels, setuptools, Cython, pmdarima
Found existing installation: statsmodels 0.10.2
Uninstalling statsmodels-0.10.2:
Successfully uninstalled statsmodels-0.10.2
Found existing installation: setuptools 50.3.0
Uninstalling setuptools-50.3.0:
Successfully uninstalled setuptools-50.3.0
Found existing installation: Cython 0.29.21
Uninstalling Cython-0.29.21:
Successfully uninstalled Cython-0.29.21
Successfully installed Cython-0.29.17 pmdarima-1.7.1 setuptools-49.6.0 statsmodels-0.11.1
###Markdown
TIME SERIES ANALYSIS
###Code
df_prices = pd.read_csv('BSESN.csv')
print(df_prices.head())
print(df_prices.size)
#Converting Date column to datetime datatype
df_prices['Date'] = pd.to_datetime(df_prices['Date'])
df_prices.info()
df_prices.dropna(inplace = True)
plt.figure(figsize=(10, 6))
df_prices['Close'].plot()
plt.ylabel('Close')
#Plotting moving average
close = df_prices['Close']
ma = close.rolling(window = 50).mean()
std = close.rolling(window = 50).std()
plt.figure(figsize=(10, 6))
df_prices['Close'].plot(color = 'b', label = 'Close')
ma.plot(color = 'r', label = 'Rolling Mean')
std.plot(label = 'Rolling Standard Deviation')
plt.legend()
#Plotting returns
returns = close / close.shift(1) - 1
plt.figure(figsize = (10,6))
returns.plot(label='Return', color = 'g')
plt.title("Returns")
train = df_prices[:1000]
test = df_prices[1000:]
#Stationarity test
def test_stationarity(timeseries):
#Determing rolling statistics
rolmean = timeseries.rolling(20).mean()
rolstd = timeseries.rolling(20).std()
#Plot rolling statistics:
plt.figure(figsize = (10,8))
plt.plot(timeseries, color = 'y', label = 'original')
plt.plot(rolmean, color = 'r', label = 'rolling mean')
plt.plot(rolstd, color = 'b', label = 'rolling std')
plt.xlabel('Date')
plt.legend()
plt.title('Rolling Mean and Standard Deviation', fontsize = 20)
plt.show(block = False)
print('Results of dickey fuller test')
result = adfuller(timeseries, autolag = 'AIC')
labels = ['ADF Test Statistic','p-value','#Lags Used','Number of Observations Used']
for value,label in zip(result, labels):
print(label+' : '+str(value) )
if result[1] <= 0.05:
print("Strong evidence against the null hypothesis(Ho), reject the null hypothesis. Data is stationary")
else:
print("Weak evidence against null hypothesis, time series is non-stationary ")
test_stationarity(train['Close'])
train_log = np.log(train['Close'])
test_log = np.log(test['Close'])
mav = train_log.rolling(24).mean()
plt.figure(figsize = (10,6))
plt.plot(train_log)
plt.plot(mav, color = 'red')
train_log.dropna(inplace = True)
test_log.dropna(inplace = True)
test_stationarity(train_log)
train_log_diff = train_log - mav
train_log_diff.dropna(inplace = True)
test_stationarity(train_log_diff)
#Using auto arima to make predictions using log data
from pmdarima import auto_arima
model = auto_arima(train_log, trace = True, error_action = 'ignore', suppress_warnings = True)
model.fit(train_log)
predictions = model.predict(n_periods = len(test))
predictions = pd.DataFrame(predictions,index = test_log.index,columns=['Prediction'])
plt.plot(train_log, label='Train')
plt.plot(test_log, label='Test')
plt.plot(predictions, label='Prediction')
plt.title('BSESN Stock Price Prediction')
plt.xlabel('Time')
plt.ylabel('Actual Stock Price')
#Calculating error
rms = np.sqrt(mean_squared_error(test_log,predictions))
print("RMSE : ", rms)
###Output
RMSE : 0.0759730171376019
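###Markdown
Because the model was fit on log-transformed prices, the forecasts can be mapped back to the original price scale with `np.exp` before computing an error in index points (a sketch using the objects defined above):
###Code
pred_price = np.exp(predictions['Prediction'])
actual_price = np.exp(test_log)

plt.figure(figsize=(10, 6))
plt.plot(actual_price, label='Actual Close')
plt.plot(pred_price, label='Predicted Close')
plt.legend()
plt.title('BSESN Close: actual vs. predicted (price scale)')

rms_price = np.sqrt(mean_squared_error(actual_price, pred_price))
print("RMSE on price scale:", rms_price)
###Output
_____no_output_____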
###Markdown
TEXTUAL ANALYSIS
###Code
cols = ['Date','Category','News']
df_news = pd.read_csv('india-news-headlines.csv', names = cols)
df_news
df_news.drop(0, inplace=True)
df_news.drop('Category', axis = 1, inplace=True)
df_news.info()
#Converting data type of Date column
df_news['Date'] = pd.to_datetime(df_news['Date'],format= '%Y%m%d')
df_news
#Grouping the headlines for each day
df_news['News'] = df_news.groupby(['Date']).transform(lambda x : ' '.join(x))
df_news = df_news.drop_duplicates()
df_news.reset_index(inplace = True, drop = True)
df_news
df_news['News']
#Cleaning headlines
ps = PorterStemmer()  # stemmer instance used inside the loop below
c = []
for i in range(0,len(df_news['News'])):
news = re.sub('[^a-zA-Z]',' ',df_news['News'][i])
news = news.lower()
news = news.split()
news = [ps.stem(word) for word in news if not word in set(stopwords.words('english'))]
news=' '.join(news)
c.append(news)
df_news['News'] = pd.Series(c)
df_news
#Functions to get the subjectivity and polarity
def getSubjectivity(text):
return TextBlob(text).sentiment.subjectivity
def getPolarity(text):
return TextBlob(text).sentiment.polarity
#Adding subjectivity and polarity columns
df_news['Subjectivity'] = df_news['News'].apply(getSubjectivity)
df_news['Polarity'] = df_news['News'].apply(getPolarity)
df_news
plt.figure(figsize = (10,6))
df_news['Polarity'].hist(color = 'purple')
plt.figure(figsize = (10,6))
df_news['Subjectivity'].hist(color = 'blue')
#Adding sentiment score to df_news
sia = SentimentIntensityAnalyzer()
df_news['Compound'] = [sia.polarity_scores(v)['compound'] for v in df_news['News']]
df_news['Negative'] = [sia.polarity_scores(v)['neg'] for v in df_news['News']]
df_news['Neutral'] = [sia.polarity_scores(v)['neu'] for v in df_news['News']]
df_news['Positive'] = [sia.polarity_scores(v)['pos'] for v in df_news['News']]
df_news
df_merge = pd.merge(df_prices, df_news, how='inner', on='Date')
df_merge
df = df_merge[['Close','Subjectivity', 'Polarity', 'Compound', 'Negative', 'Neutral' ,'Positive']]
df
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler()
new_df = pd.DataFrame(sc.fit_transform(df))
new_df.columns = df.columns
new_df.index = df.index
new_df.head()
X = new_df.drop('Close', axis=1)
y =new_df['Close']
X.head()
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state = 0)
x_train.shape
x_train[:10]
rf = RandomForestRegressor()
rf.fit(x_train, y_train)
prediction=rf.predict(x_test)
print(prediction[:10])
print(y_test[:10])
print(mean_squared_error(prediction,y_test))
adb = AdaBoostRegressor()
adb.fit(x_train, y_train)
predictions = adb.predict(x_test)
print(mean_squared_error(predictions, y_test))
from sklearn.tree import DecisionTreeRegressor
dec_tree = DecisionTreeRegressor()
dec_tree.fit(x_train, y_train)
predictions = dec_tree.predict(x_test)
print(predictions[:10])
print(y_test[:10])
print(mean_squared_error(predictions,y_test))
lgb = lightgbm.LGBMRegressor()
lgb.fit(x_train, y_train)
predictions = lgb.predict(x_test)
print(mean_squared_error(predictions,y_test))
xgb = xgboost.XGBRegressor()
xgb.fit(x_train, y_train)
predictions = xgb.predict(x_test)
print(mean_squared_error(predictions,y_test))
###Output
0.043341490145982466
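###Markdown
To compare the fitted regressors side by side, their test-set errors can be gathered in a single loop (a sketch using the models trained above):
###Code
fitted_models = {'Random Forest': rf, 'AdaBoost': adb, 'Decision Tree': dec_tree,
                 'LightGBM': lgb, 'XGBoost': xgb}
for name, estimator in fitted_models.items():
    mse = mean_squared_error(y_test, estimator.predict(x_test))
    print(f"{name:15s} MSE: {mse:.5f}")
###Output
_____no_output_____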
|
linearclassiferPytorch.ipynb | ###Markdown
Linear Classifier with PyTorch Before you use a Deep neural network to solve the classification problem, it's a good idea to try and solve the problem with the simplest method. You will need the dataset object from the previous section. In this lab, we solve the problem with a linear classifier. You will be asked to determine the maximum accuracy your linear classifier can achieve on the validation data for 5 epochs. We will give some free parameter values; if you follow the instructions, you will be able to answer the quiz. Just like the other labs, there are several steps, but in this lab you will only be quizzed on the final result. Table of Contents Download data Imports and Auxiliary Functions Dataset Class Transform Object and Dataset Object Question Estimated Time Needed: 25 min Download Data In this section, you are going to download the data from IBM object storage using wget, then unzip them. wget is a command that retrieves content from web servers, in this case a zip file. Locally we store the data in the directory /resources/data. The -p creates the entire directory tree up to the given directory. First, we download the file that contains the images; if you didn't do this in your first lab, uncomment:
###Code
!mkdir /content/data
!wget https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0321EN/data/images/concrete_crack_images_for_classification.zip -P /content/data
###Output
--2020-05-18 08:58:15-- https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0321EN/data/images/concrete_crack_images_for_classification.zip
Resolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.196
Connecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|67.228.254.196|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 245259777 (234M) [application/zip]
Saving to: ‘/content/data/concrete_crack_images_for_classification.zip’
concrete_crack_imag 100%[===================>] 233.90M 45.6MB/s in 5.2s
2020-05-18 08:58:21 (44.9 MB/s) - ‘/content/data/concrete_crack_images_for_classification.zip’ saved [245259777/245259777]
###Markdown
We then unzip the file; this may take a while:
###Code
!unzip -q /content/data/concrete_crack_images_for_classification.zip -d /content/data
###Output
_____no_output_____
###Markdown
We then download the files that contain the negative images: Imports and Auxiliary Functions The following are the libraries we are going to use for this lab:
###Code
from PIL import Image
import matplotlib.pyplot as plt
import os
import glob
import torch
from torch.utils.data import Dataset, DataLoader
import torchvision.transforms as transforms
import torch.nn as nn
from torch import optim
###Output
_____no_output_____
###Markdown
Dataset Class In this section, we will use the previous code to build a dataset class. As before, make sure the even samples are positive, and the odd samples are negative. If the parameter train is set to True, use the first 30 000 samples as training data; otherwise, the remaining samples will be used as validation data. Do not forget to sort your files so they are in the same order.
###Code
class Dataset(Dataset):
# Constructor
def __init__(self,transform=None,train=True):
directory="/content/data"
positive="Positive"
negative="Negative"
positive_file_path=os.path.join(directory,positive)
negative_file_path=os.path.join(directory,negative)
positive_files=[os.path.join(positive_file_path,file) for file in os.listdir(positive_file_path) if file.endswith(".jpg")]
positive_files.sort()
negative_files=[os.path.join(negative_file_path,file) for file in os.listdir(negative_file_path) if file.endswith(".jpg")]
negative_files.sort()
number_of_samples=len(positive_files)+len(negative_files)
self.all_files=[None]*number_of_samples
self.all_files[::2]=positive_files
self.all_files[1::2]=negative_files
# The transform is goint to be used on image
self.transform = transform
#torch.LongTensor
self.Y=torch.zeros([number_of_samples]).type(torch.LongTensor)
self.Y[::2]=1
self.Y[1::2]=0
if train:
self.all_files=self.all_files[0:30000]
self.Y=self.Y[0:30000]
self.len=len(self.all_files)
else:
self.all_files=self.all_files[30000:]
self.Y=self.Y[30000:]
self.len=len(self.all_files)
# Get the length
def __len__(self):
return self.len
# Getter
def __getitem__(self, idx):
image=Image.open(self.all_files[idx])
y=self.Y[idx]
# If there is any transform method, apply it onto the image
if self.transform:
image = self.transform(image)
return image, y
###Output
_____no_output_____
###Markdown
Transform Object and Dataset Object Create a transform object, that uses the Compose function. First use the transform ToTensor() and followed by Normalize(mean, std). The value for mean and std are provided for you.
###Code
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]
# transforms.ToTensor()
#transforms.Normalize(mean, std)
#transforms.Compose([])
# note: the random flip / perspective steps below are optional data augmentation,
# added on top of the ToTensor() and Normalize(mean, std) transforms described above
transform = transforms.Compose([
transforms.RandomHorizontalFlip(p=0.5),
transforms.RandomVerticalFlip(p=0.5),
transforms.RandomPerspective(distortion_scale=0.5, p=0.5, interpolation=3, fill=0),
transforms.ToTensor(),
transforms.Normalize(mean, std),
])
###Output
_____no_output_____
###Markdown
Create object for the training data dataset_train and validation dataset_val. Use the transform object to convert the images to tensors using the transform object:
###Code
dataset_train=Dataset(transform=transform,train=True)
dataset_val=Dataset(transform=transform,train=False)
###Output
_____no_output_____
###Markdown
We can find the shape of the image:
###Code
dataset_train[0][0].shape
###Output
_____no_output_____
###Markdown
We see that it's a color image with three channels:
###Code
size_of_image=3*227*227
size_of_image
###Output
_____no_output_____
###Markdown
Question Create a custom module for Softmax for two classes,called model. The input size should be the size_of_image, you should record the maximum accuracy achieved on the validation data for the different epochs. For example if the 5 epochs the accuracy was 0.5, 0.2, 0.64,0.77, 0.66 you would select 0.77. Train the model with the following free parameter values: Parameter Values learning rate:0.1 momentum term:0.1 batch size training:1000 Loss function:Cross Entropy Loss epochs:5 set: torch.manual_seed(0)
###Code
torch.manual_seed(0)
###Output
_____no_output_____
###Markdown
Custom Module:
###Code
class Softmax(nn.Module):
def __init__(self, in_size, out_size):
super(Softmax, self).__init__()
self.in_size = in_size
self.out_size = out_size
self.fc1 = nn.Linear(in_size, out_size)
def forward(self, x):
x = x.view(-1, 227*227*3)
out = self.fc1(x)
return out
###Output
_____no_output_____
###Markdown
Model Object:
###Code
model = Softmax(size_of_image, 2)
###Output
_____no_output_____
###Markdown
Optimizer:
###Code
# the question specifies SGD with learning rate 0.1 and momentum 0.1
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.1)
###Output
_____no_output_____
###Markdown
Criterion:
###Code
criterion = nn.CrossEntropyLoss()
###Output
_____no_output_____
###Markdown
Data Loader Training and Validation:
###Code
train_loader = torch.utils.data.DataLoader(dataset=dataset_train, batch_size=1000, shuffle=True)
validation_loader = torch.utils.data.DataLoader(dataset=dataset_val, batch_size=20)
###Output
_____no_output_____
###Markdown
Train Model with 5 epochs, should take 35 minutes:
###Code
n_epochs = 5
for epoch in range(n_epochs):
    running_loss = 0.0  # keep the running loss in its own variable so it is not overwritten by the batch loss
    for i, data in enumerate(train_loader, 0):
        inputs, labels = data
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        if i % 1000 == 0:
            print("epoch: {} loss: {}".format(epoch, running_loss))
            running_loss = 0.0
correct = 0
total = 0
with torch.no_grad():
for data in validation_loader:
images, labels = data
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy: {}'.format(100 * correct / total))
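    # Added sketch: track the best validation accuracy across epochs, as the exercise asks.
    # `epoch_accuracy` and `best_accuracy` are assumed helper variables, not part of the original notebook.
    epoch_accuracy = 100 * correct / total
    best_accuracy = epoch_accuracy if epoch == 0 else max(best_accuracy, epoch_accuracy)
    print('Best accuracy so far: {}'.format(best_accuracy))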
###Output
_____no_output_____ |
lectures/Matplotlib - Graph Types.ipynb | ###Markdown
Matplotlib Cont. Graph Types--- HistogramsWe've seen them a bit already, but let's dive a little deeper. One thing with histograms that can be useful is viewing normalized values instead of total counts. With a normalized view we can see the percentage of the population that falls into a given bucket, sometimes easing the interpretation of the graph.`hist(x, bins=None, range=None, density=None, weights=None, cumulative=False, bottom=None, histtype='bar', align='mid', orientation='vertical', rwidth=None, log=False, color=None, label=None, stacked=False, normed=None, hold=None, data=None, **kwargs)`
###Code
# Histograms
#%matplotlib qt
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure(figsize=(18.5, 10.5)) # We can define the figure on init with the figsize parameter
ax1 = fig.add_subplot(2, 2, 1)
ax2 = fig.add_subplot(2, 2, 2)
#ax3 = fig.add_subplot(2, 2, 3)
data = np.random.normal(size=1000)
ax1.hist(data) # hist() on an Axes (or on pyplot) will create a histogram
"""Here we set density=True (the replacement for the deprecated normed=True),
so the bar heights are normalized to a probability density instead of raw counts."""
ax2.hist(data, density=True)
plt.show()
###Output
_____no_output_____
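###Markdown
Added sketch: passing `cumulative=True` together with `density=True` turns the histogram into a discrete CDF, one of the parameters listed in the signature above. The figure and axes names below are just illustrative, not part of the original notebook.
###Code
# Sketch: cumulative, normalized histogram (a discrete CDF) of the same normal sample.
fig_cdf = plt.figure(figsize=(6, 4))
ax_cdf = fig_cdf.add_subplot(1, 1, 1)
ax_cdf.hist(data, density=True, cumulative=True)
plt.show()
###Output
_____no_output_____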
###Markdown
Line PlotsOne thing we've mentioned previously is how scale can affect our perception of data. Here we see how we can alter the scaling of our data in a line plot to get a deeper understanding of what is going on.
###Code
#Line Plots
fig = plt.figure(figsize=(18.5, 10.5)) # We can define the figure on init with the figsize parameter
ax1 = fig.add_subplot(2, 2, 1)
ax2 = fig.add_subplot(2, 2, 2)
#ax3 = fig.add_subplot(2, 2, 3)
graph_gen = lambda x: (1/4 * np.power(x, 2)) + 5
data = [graph_gen(x) for x in range(0,100)]
data2 = data + (10 * np.random.randn(100))
ax1.plot(data, color='b') # plot the clean quadratic data in blue
ax1.plot(data2, color='r')
ax2.plot(data)
ax2.plot(data2, color='r')
ax2.set_yscale('log', base=2) # This tells us to plot on a log scale (newer Matplotlib uses base; older versions used basex/basey)
plt.show()
###Output
_____no_output_____
###Markdown
Bar PlotsBarplots allow us to plot counts of categorical data for comparative analysis.
###Code
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure(figsize=(10, 5)) #We can define the figure on init with the figsize parameter
ax1 = fig.add_subplot(1, 1, 1)
#ax3 = fig.add_subplot(2, 2, 3)
"""bar(x, height, width, bottom, *, align='center', **kwargs)"""
labels = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J']
values = np.random.randint(0,50, len(labels))
ax1.bar(labels, values, width = .8)
plt.show()
###Output
_____no_output_____
###Markdown
Scatter PlotsScatterplots enable us to view data with multiple dimensions.`matplotlib.pyplot.scatter(x, y, s=None, c=None, marker=None, cmap=None, norm=None, vmin=None, vmax=None, alpha=None, linewidths=None, verts=None, edgecolors=None, hold=None, data=None, **kwargs)`
###Code
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure(figsize=(18.5, 10.5)) # We can define the figure on init with the figsize parameter
ax1 = fig.add_subplot(2, 2, 1)
ax2 = fig.add_subplot(2, 2, 2)
#ax3 = fig.add_subplot(2, 2, 3)
y = np.arange(100) + 5 * np.random.normal(loc = 0, scale = 3, size = 100)
x = np.arange(100)
c = ['r'] * 50 + ['g'] * 50 # We can define the color for each point in our dataset
ax1.scatter(x, y, c=c)
x = np.random.rand(50)
y = np.random.rand(50)
colors = np.random.rand(50) # We can pass in floats for colors, and matplotlib will convert it to colors
area = np.pi * (15 * np.random.rand(50))**2 # We can also modify the size of the markers
ax2.scatter(x, y, s=area, c=colors, alpha=0.5) # Remember alpha is the transparency of the elements
plt.show()
###Output
_____no_output_____
###Markdown
Inclass Work: Problem 1 - Scatterplot Cities. Each city has: - Median Income - Average Age - Population. Plot each city by its median income and avg age, with the size of each city designated by population. **Bonus - color each city by a region (east, west, south, midwest)**
###Code
import random
income = np.random.uniform(20000, 50000, size=100)
age = np.random.normal(loc=50, scale=10, size=100)
median_income = np.median(income)
# print(avg_age)
# print(median_income)
pop = np.random.randint(1, 10, size=100) # IN millions
region = [random.choice(['east', 'west', 'south', 'midwest']) for i in range(100)]
reg_map = {'east': 'g', 'west': 'b', 'south': 'r', 'midwest': 'orange'}
cols = [reg_map[reg] for reg in region]
fig = plt.figure(figsize=(10, 8)) # We can define the figure on init with the figsize parameter
ax1 = fig.add_subplot(1, 1, 1)
ax1.scatter(income, age, s=pop*10, c=cols)
# for curr_region in reg_map.keys():
# curr_age = age[region == curr_region]
# curr_income = income[region==curr_region]
# print(curr_age.shape)
# print(curr_income.shape)
# pop - np.square(pop)  # commented out: this expression had no effect (its result was never used)
plt.xlabel('Income')
plt.ylabel('Age')
plt.title('Median Income per Age in Random City Populations')
plt.show()
###Output
(0, 100)
(0, 100)
(0, 100)
(0, 100)
(0, 100)
(0, 100)
(0, 100)
(0, 100)
###Markdown
Violin and Box PlotsViolin/Box plots enable us to view basic statistics for a given distribution of data`Axes.boxplot(x, notch=None, sym=None, vert=None, whis=None, positions=None, widths=None,patch_artist=None, bootstrap=None, usermedians=None, conf_intervals=None, meanline=None,showmeans=None, showcaps=None, showbox=None, showfliers=None, boxprops=None, labels=None,flierprops=None, medianprops=None, meanprops=None, capprops=None, whiskerprops=None,manage_xticks=True, autorange=False, zorder=None)`
###Code
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure(figsize=(18.5, 10.5)) # We can define the figure on init with the figsize parameter
ax1 = fig.add_subplot(2, 2, 1)
ax2 = fig.add_subplot(2, 2, 2)
ax3 = fig.add_subplot(2, 2, 3)
ax4 = fig.add_subplot(2, 2, 4)
data = np.arange(100) + 5 * np.random.normal(loc = 0, scale = 3, size = 100)
ax1.boxplot(data, whis=.5) # whis sets the whisker length as a multiple of the IQR
ax3.violinplot(data)
data2 = [data, np.random.normal(loc = 50, scale = 20, size = 1000)] #Can plot multiple boxes at once
# bootstrap - lets us calculate the confidence interval for notches via bootstrapping, define # of iterations
# usermedians - Can manually define medians to use for the data
ax2.boxplot(data2, notch = True)
ax4.violinplot(data2)
plt.show()
###Output
_____no_output_____
###Markdown
Pie PlotsPie plots are similar to bar plots, but demonstrate how each category takes up a percentage of the total population (100%)`Axes.pie(x, explode=None, labels=None, colors=None, autopct=None, pctdistance=0.6, shadow=False, labeldistance=1.1, startangle=None, radius=None, counterclock=True, wedgeprops=None, textprops=None, center=(0, 0), frame=False)`
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure(figsize=(18.5, 10.5)) #We can define the figure on init with the figsize parameter
ax1 = fig.add_subplot(2, 2, 1)
ax2 = fig.add_subplot(2, 2, 2)
ax3 = fig.add_subplot(2, 2, 3)
labels = 'Frogs', 'Hogs', 'Dogs', 'Logs'
sizes = [15, 30, 20, 10]
ax1.pie(sizes, labels=labels, autopct='%1.1f%%', shadow=True)
ax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle.
ax2.pie(sizes, labels=labels, autopct='%1.0f%%', shadow=True) #without equal axis
#autopct - a string formatting for percentages '%{#sig}.{#sig}%%' - This can also be a function
explode = (0, 0.2, 0, 0)
ax3.pie(sizes, explode=explode, labels=labels, autopct='%1.1f%%', shadow=True) #without equal axis
ax3.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
Polar Plots
###Code
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure(figsize=(18.5, 10.5)) # We can define the figure on init with the figsize parameter
ax1 = fig.add_subplot(1, 1, 1, projection='polar')
# Compute pie slices
N = 20
num_labels = 10
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False) # Represents rotation
radii = 10 * np.random.rand(N) # Represents quantity
width = np.pi / 4 * np.random.rand(N)
label_bins = np.linspace(np.min(radii), np.max(radii) + 1, num_labels+1) # create num_labels bins from min to max
bars = ax1.bar(theta, radii, width=width, bottom=0.0)
# Use custom colors and opacity
labels = list(range(num_labels))
print(labels)
label_bars = [0] * num_labels # setup labels for elements needed
for r, bar in zip(radii, bars):
bar.set_facecolor(plt.cm.viridis(r / 10.))
bar.set_alpha(0.5)
ix = 0
while r > label_bins[ix + 1]:
ix += 1
bar.set_label(ix)
label_bars[ix] = bar
ax1.legend(label_bars, labels, loc="best") # one legend entry per bin to reflect the breakdown of values
ax1.set_xticklabels(['E', '', 'N', '', 'W', '', 'S'])
plt.show()
###Output
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
|
_build/jupyter_execute/ipynb/03b-matematica-discreta.ipynb | ###Markdown
Fundamentos de Matemática Discreta com Python - Parte 2 Controle de fluxo: condicionais `if`, `elif` e `else`Em Python, como na maioria das linguagens, o operador `if` ("se") serve para tratar situações quando um bloco de instruções de código precisa ser executado apenas se uma dada condição estabelecida for avaliada como verdadeira. Um bloco condicional é escrito da seguinte forma: ```pythonif condição: faça algo```Este bloco diz basicamente o seguinte: "faça algo se a condição for verdadeira". Vejamos alguns exemplos.
###Code
if 2 > 0: # a condição é 'True'
print("2 é maior do que 0!")
2 > 0 # esta é a condição que está sendo avaliada
if 2 < 1: # nada é impresso porque a condição é 'False'
print("2 é maior do que 0!")
2 < 1 # esta é a condição que está sendo avaliada
###Output
_____no_output_____
###Markdown
A condição pode ser formada de diversas formas desde que possa ser avaliada como `True` ou `False`.
###Code
x, y = 2, 4
if x < y:
print(f'{x} < {y}')
###Output
2 < 4
###Markdown
A estrutura condicional pode ser ampliada com um ou mais `elif` ("ou se") e com `else` (senão). Cada `elif`, uma redução de *else if*, irá testar uma condição adicional se a condição relativa a `if` for `False`. Se alguma delas for testada como `True`, o bloco de código correspondende será executado. Caso contrário, a decisão do interpretador será executar o bloco que acompanhará `else`. **Exemplo:** teste da tricotomia. Verificar se um número é $>$, $<$ ou $= 0$.
###Code
x = 4.1 # número para teste
if x < 0: # se
print(f'{x} < 0')
elif x > 0: # ou se
print(f'{x} > 0')
else: # senão
print(f'{x} = 0')
###Output
4.1 > 0
###Markdown
**Exemplo:** Considere o conjunto de classificações sanguíneas ABO (+/-) $$S = \{\text{A+}, \text{A-}, \text{B+}, \text{B-}, \text{AB+}, \text{AB-}, \text{O+}, \text{O-}\}$$Se em um experimento aleatório, $n$ pessoas ($n \geq 500$) diferentes entrassem por um hospital em um único dia, qual seria a probabilidade de $p$ entre as $n$ pessoas serem classificadas como um(a) doador(a) universal (sangue $\text{O-}$) naquele dia? Em seguida, estime a probabilidade das demais.
###Code
# 'randint' gera inteiros aleatoriamente
from random import randint
# número de pessoas
n = 500
# associa inteiros 0-7 ao tipo sanguíneo
tipos = [i for i in range(0,8)]
sangue = dict(zip(tipos,['A+','A-','B+','B-','AB+','AB-','O+','O-']))
# primeira pessoa
i = randint(0,7) # randint é inclusivo nas duas pontas; os tipos válidos vão de 0 a 7
# grupo sanguíneo
s = []
# repete n vezes
for _ in range(0,n):
if i == 0:
s.append(0)
elif i == 1:
s.append(1)
elif i == 2:
s.append(2)
elif i == 3:
s.append(3)
elif i == 4:
s.append(4)
elif i == 5:
s.append(5)
elif i == 6:
s.append(6)
else:
s.append(7)
i = randint(0,7) # nova pessoa
# calcula a probabilidade do tipo p em %.
# Seria necessário definir uma lambda?
prob = lambda p: p/n*100
# armazena probabilidades no dict P
P = {}
for tipo in tipos:
P[tipo] = prob(s.count(tipo))
if sangue[tipo] == 'O-':
print('A probabilidade de ser doador universal é de {0:.2f}%.'.format(P[tipo]))
else:
print('A probabilidade de ser {0:s} é de {1:.2f}%.'.format(sangue[tipo],P[tipo]))
###Output
A probabilidade de ser A+ é de 11.60%.
A probabilidade de ser A- é de 12.80%.
A probabilidade de ser B+ é de 10.80%.
A probabilidade de ser B- é de 12.60%.
A probabilidade de ser AB+ é de 14.80%.
A probabilidade de ser AB- é de 12.20%.
A probabilidade de ser O+ é de 13.80%.
A probabilidade de ser doador universal é de 11.40%.
###Markdown
ConjuntosAs estruturas `set` (conjunto) são úteis para realizar operações com conjuntos.
###Code
set(['a','b','c']) # criando por função
{'a','b','c'} # criando de modo literal
{1,2,2,3,3,4,4,4} # 'set' possui unicidade de elementos
###Output
_____no_output_____
###Markdown
União de conjuntosConsidere os seguintes conjuntos.
###Code
A = {1,2,3}
B = {3,4,5}
C = {6}
A.union(B) # união
A | B # união com operador alternativo ('ou')
###Output
_____no_output_____
###Markdown
Atualização de conjuntos (união)A união *in-place* de dois conjuntos pode ser feita com `update`.
###Code
C
C.update(B) # C é atualizado com elementos de B
C
C.union(A) # conjunto união com A
C # os elementos de A não foram atualizados em C
###Output
_____no_output_____
###Markdown
A atualização da união possui a seguinte forma alternativa com `|=`.
###Code
C |= A # elementos de A atualizados em C
C
###Output
_____no_output_____
###Markdown
Interseção de conjuntos
###Code
A.intersection(B) # interseção
A & B # interseção com operador alternativo ('e')
###Output
_____no_output_____
###Markdown
Atualização de conjuntos (interseção)A interseção *in-place* de dois conjuntos pode ser feita com `intersection_update`.
###Code
D = {1, 2, 3, 4}
E = {2, 3, 4, 5}
D.intersection(E) # interseção com E
D # D inalterado
D.intersection_update(E)
D # D alterado
###Output
_____no_output_____
###Markdown
A atualização da interseção possui a seguinte forma alternativa com `&=`.
###Code
D &= E
D
###Output
_____no_output_____
###Markdown
Diferença entre conjuntos
###Code
A
D
A.difference(D) # apenas elementos de A
D.difference(A) # apenas elementos de D
A - D # operador alternativo
D - A
###Output
_____no_output_____
###Markdown
Atualização de conjuntos (diferença)A interseção *in-place* de dois conjuntos pode ser feita com `difference_update`.
###Code
D = {1, 2, 3, 4}
E = {1, 2, 3, 5}
D
D.difference(E)
D
D.difference_update(E)
D
###Output
_____no_output_____
###Markdown
A atualização da diferença possui a seguinte forma alternativa com `-=`.
###Code
D -= E
D
###Output
_____no_output_____
###Markdown
Adição ou remoção de elementos
###Code
A
A.add(4) # adiciona 4 a A
A
B
B.remove(3) # remove 3 de B
B
###Output
_____no_output_____
###Markdown
Reinicialização de um conjunto (vazio)Podemos remover todos os elementos de um conjunto com `clear`, deixando-o em um estado vazio.
###Code
A
A.clear()
A # A é vazio
len(A) # 0 elementos
###Output
_____no_output_____
###Markdown
Diferença simétricaA diferença simétrica entre dois conjuntos $A$ e $B$ é dada pela união dos complementares relativos: $$A \triangle B = A\backslash B \cup B\backslash A$$Logo, em $A \triangle B$ estarão todos os elementos que pertencem a $A$ ou a $B$ mas não aqueles que são comuns a ambos.**Nota:** os complementares relativos $A\backslash B$ e $B\backslash A$ aqui podem ser interpretados como $A-B$ e $B-A$. Os símbolos $\backslash$ e $-$ em conjuntos podem ter sentidos diferentes em alguns contextos.
###Code
G = {1,2,3,4}
H = {3,4,5,6}
G.symmetric_difference(H) # {3,4} ficam de fora, pois são interseção
G ^ H # operador alternativo
###Output
_____no_output_____
###Markdown
Atualização de conjuntos (diferença simétrica)A diferença simétrica *in-place* de dois conjuntos pode ser feita com `symmetric_difference_update`.
###Code
G
G.symmetric_difference_update(H)
G # alterado
G ^= H # operador alternativo
G
###Output
_____no_output_____
###Markdown
ContinênciaPodemos verificar se um conjunto $A$ é subconjunto de (está contido em) outro conjunto $B$ ($A \subseteq B$) ou se $B$ é um superconjunto para (contém) $A$ ($B \supseteq A$) com `issubset` e `issuperset`.
###Code
B
C
B.issubset(C) # B está contido em C
C.issuperset(B) # C contém B
###Output
_____no_output_____
###Markdown
Subconjuntos e subconjuntos próprios Podemos usar operadores de comparação entre conjuntos para verificar continência.- $A \subseteq B$: $A$ é subconjunto de $B$- $A \subset B$: $A$ é subconjunto próprio de $B$ ($A$ possui elementos que não estão em $B$)
###Code
{1,2,3} <= {1,2,3} # subconjunto
{1,2} < {1,2,3} # subconjunto próprio
{1,2,3} > {1,2}
{1,2} >= {1,2,3}
###Output
_____no_output_____
###Markdown
DisjunçãoDois conjuntos são disjuntos se sua interseção é vazia. Podemos verificar a disjunção com `isdisjoint`
###Code
E
G
E.isdisjoint(G) # False: 1, 2 e 3 são comuns a E e G
D
E.isdisjoint(D)
A
E.isdisjoint(A)
###Output
_____no_output_____
###Markdown
Igualdade entre conjuntosDois conjuntos são iguais se contém os mesmos elementos.
###Code
H = {3,'a', 2}
I = {'a',2, 3}
J = {1,'a'}
H == I
H == J
{1,2,2,3} == {3,3,3,2,1} # lembre-se da unicidade
###Output
_____no_output_____
###Markdown
Compreensão de conjuntoPodemos usar `for` para criar conjuntos de maneira esperta do mesmo modo que as compreensões de lista e de dicionários. Neste caso, o funcionamento é como `list`, porém, em vez de colchetes, usamos chaves.
###Code
{e for e in range(0,10)}
{(i,v) for (i,v) in enumerate(range(0,4))}
###Output
_____no_output_____
###Markdown
Sobrecarga de operadoresEm Python, podemos realizar alguns procedimentos úteis para laços de repetição.
###Code
x = 2
x += 1 # x = 2 + 1 (incrementação)
x
y = 3
y -= 1 # y = 3 - 1 (decrementação)
y
z = 2
z *= 2 # z = 2*2
z
t = 3
t /= 3 # t = 3/3
t
###Output
_____no_output_____
###Markdown
**Exemplo:** verifique se a soma das probabilidades no `dict` `P` do experimento aleatório é realmente 100%.
###Code
s = 0
for p in P.values(): # itera sobre os valores de P
s += p # soma cumulativa
print(f'A soma de P é {s}%')
###Output
A soma de P é 100.0%
###Markdown
De modo mais Pythônico:
###Code
sum(P.values()) == 100
###Output
_____no_output_____
###Markdown
Ou ainda:
###Code
if sum(P.values()) == 100:
print(f'A soma de P é {s}%')
else:
print(f'Há erro no cálculo!')
###Output
A soma de P é 100.0%
###Markdown
Controle de fluxo: laço `while`O condicional `while` permite que um bloco de código seja repetidamente executado até que uma dada condição seja avaliada como `False`, ou o laço seja explicitamente terminado com a keyword `break`.Em laços `while`, é muito comum usar uma linha de atualização da condição usando sobrecarga de operadores.A instrução é como segue: ```pythonwhile condicao: faça isso atualize condicao```
###Code
x = 10
boom = 0
while x > boom: # não leva em conta igualdade
print(x)
x -= 1 # atualizando por decrementação
print('Boom!')
x = 5
boom = 10
while x <= boom: # leva em conta igualdade
print(x)
x += 0.5 # atualizando por incrementação
from math import sin,pi
x = 1.0
i = 1
while x**3 > 0:
    if i % 100 == 0: # imprime apenas a cada 100 repetições
print(f'Repeti {i} vezes e x = {x**3}. Contando...')
x -= 1e-3 # atualiza o decremento
i += 1 # contagem de repetição
print(f'x = {x**3}')
from math import sin,pi
x = 1.0
i = 1
while x**3 > 0:
    if i % 100 == 0: # imprime apenas a cada 100 repetições
print(f'Repeti {i} vezes e x = {x**3}. Contando...')
if i == 500:
print(f'Repeti demais. Vou parar.')
break # execução interrompida aqui
x -= 1e-3 # atualiza o decremento
i += 1 # contagem de repetição
print(f'x = {x**3}')
###Output
Repeti 100 vezes e x = 0.7314327009999998. Contando...
Repeti 200 vezes e x = 0.5139224009999996. Contando...
Repeti 300 vezes e x = 0.3444721009999996. Contando...
Repeti 400 vezes e x = 0.2170818009999996. Contando...
Repeti 500 vezes e x = 0.12575150099999965. Contando...
Repeti demais. Vou parar.
x = 0.12575150099999965
###Markdown
**Exemplo:** construa seu próprio gerador de números aleatórios para o problema da entrada de pessoas no hospital.
###Code
# exemplo simples
def meu_gerador():
nums = []
while True: # executa indefinidamente até se digitar ''
entr = input() # entrada do usuário
nums.append(entr) # armazena
if entr == '': # pare se nada mais for inserido
return list(map(int,nums[:-1])) # converte para int e remove '' da lista
# execução:
# 2; shift+ENTER; para 2
# 3; shift+ENTER; para 3
# 4; shift+ENTER; para 4
# shift+ENTER; para nada
nums = meu_gerador()
nums
###Output
2
3
4
###Markdown
**Exemplo:** verifique se a soma das probabilidades no `dict` `P` do experimento aleatório é realmente 100%.
###Code
sum(P.values())
###Output
_____no_output_____
###Markdown
`map`A função `map` serve para construir uma função que será aplicada a todos os elementos de uma sequencia. Seu uso é da seguinte forma: ```pythonmap(funcao,sequencia)``` No exemplo anterior, as entradas do usuário são armazenadas como `str`, isto é, '2', '3' e '4'. Para que elas sejam convertidas para `int`, nós executamos um *casting* em todos os elementos da sequencia usando `map`. A interpretação é a seguinte: para todo `x` pertencente a `sequencia`, aplique `funcao(x)`. Porém, para se obter o resultado desejado, devemos ainda aplicar `list` sobre o `map`.
###Code
nums = ['2','3','4']
nums
m = map(int,nums) # aplica a função 'int' aos elementos de 'num'
m
###Output
_____no_output_____
###Markdown
Observe que a resposta de `map` não é *human-readable*. Para lermos o que queremos, fazemos:
###Code
l = list(m) # aplica 'list' sobre 'map'
l
###Output
_____no_output_____
###Markdown
Podemos substituir `funcao` por uma função anônima. Assim, suponha que você quisesse transformar os valores de entrada elevando cada número ao quadrado. Poderíamos fazer isso como:
###Code
list(map(lambda x: x**2,l)) # eleva elementos ao quadrado
###Output
_____no_output_____
###Markdown
`filter`Podemos aplicar também uma espécie de "filtro" para valores usando a função `filter`. No caso anterior, digamos que valores acima de 7 sejam inseridos erroneamente no gerador de números (lembre-se que no sistema sanguíneo ABO, consideramos um `dict` cujo valor das chaves é no máximo 7). Podemos, ainda assim, filtrar a lista para coletar apenas valores menores ou iguais a 7. Para tanto, definimos uma função `lambda` com este propósito.
###Code
lista_erronea = [2,9,4,6,7,1,9,10,2,4,5,2,7,7,11,7,6]
lista_erronea
f = filter(lambda x: x <= 7, lista_erronea) # aplica filtro
f
lista_corrigida = list(f) # valores > 7 excluídos
lista_corrigida
###Output
_____no_output_____
###Markdown
Exemplos com maior complexidade **Exemplo:** Podemos escrever outro gerador de forma mais complexa. Estude este caso (pouco Pythônico).
###Code
import random
la = random.sample(range(0,1000),1000) # escolhe 1000 números numa lista aleatória de 0 a 1000
teste = lambda x: -1 if x >= 8 else x # retorna x se estiver no intervalo [0,7]; caso contrário, retorna -1
f = list(map(teste,la))
final = list(filter(lambda x: x != -1,f)) # remove os valores marcados com -1 (ou seja, os maiores que 7)
final
###Output
_____no_output_____
###Markdown
**Exemplo:** Associando arbitrariamente o identificador de uma pessoa a um tipo sanguíneo com compreensão de `dict`.
###Code
id_pessoas = {chave:x for chave,x in enumerate(f) if x > -1} # compreensão de dicionário com if
id_pessoas
###Output
_____no_output_____ |
Dataset Imbalance/Baselines Validation/ResNext101/ResNext101 Baseline.ipynb | ###Markdown
Mount my google drive, where I stored the dataset.
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
_____no_output_____
###Markdown
**Download dependencies**
###Code
!pip3 install sklearn matplotlib GPUtil
!pip3 install torch torchvision
###Output
_____no_output_____
###Markdown
**Download Data** In order to acquire the dataset please navigate to: https://ieee-dataport.org/documents/cervigram-image-dataset Unzip the dataset into the folder "dataset". For your environment, please adjust the paths accordingly.
###Code
!rm -vrf "dataset"
!mkdir "dataset"
# !cp -r "/content/drive/My Drive/Studiu doctorat leziuni cervicale/cervigram-image-dataset-v2.zip" "dataset/cervigram-image-dataset-v2.zip"
!cp -r "cervigram-image-dataset-v2.zip" "dataset/cervigram-image-dataset-v2.zip"
!unzip "dataset/cervigram-image-dataset-v2.zip" -d "dataset"
###Output
_____no_output_____
###Markdown
**Constants** For your environment, please modify the paths accordingly.
###Code
# TRAIN_PATH = '/content/dataset/data/train/'
# TEST_PATH = '/content/dataset/data/test/'
TRAIN_PATH = '../dataset/data/train/'
TEST_PATH = '../dataset/data/test/'
CROP_SIZE = 260
IMAGE_SIZE = 224
BATCH_SIZE = 50
###Output
_____no_output_____
###Markdown
**Imports**
###Code
import torch as t
import torchvision as tv
import numpy as np
import PIL as pil
import matplotlib.pyplot as plt
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
from torch.nn import Linear, BCEWithLogitsLoss
import sklearn as sk
import sklearn.metrics
from os import listdir
import time
import random
import GPUtil
###Output
_____no_output_____
###Markdown
**Memory Stats**
###Code
import GPUtil
def memory_stats():
for gpu in GPUtil.getGPUs():
print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))
memory_stats()
###Output
GPU RAM Free: 11019MB | Used: 0MB | Util 0% | Total 11019MB
GPU RAM Free: 11019MB | Used: 0MB | Util 0% | Total 11019MB
###Markdown
**Deterministic Measurements** These statements help make the experiments reproducible by fixing the random seeds. Despite fixing the random seeds, experiments are usually not reproducible using different PyTorch releases, commits, platforms or between CPU and GPU executions. Please find more details in the PyTorch documentation: https://pytorch.org/docs/stable/notes/randomness.html
###Code
SEED = 0
t.manual_seed(SEED)
t.cuda.manual_seed(SEED)
t.backends.cudnn.deterministic = True
t.backends.cudnn.benchmark = False
np.random.seed(SEED)
random.seed(SEED)
###Output
_____no_output_____
###Markdown
**Loading Data** The dataset is structured in multiple small folders of 7 images each. This generator iterates through the folders and returns the category and 7 paths: one for each image in the folder. The paths are ordered; the order is important since each folder contains 3 types of images, first 5 are with acetic acid solution and the last two are through a green lens and having iodine solution(a solution of a dark red color).
###Code
def sortByLastDigits(elem):
chars = [c for c in elem if c.isdigit()]
return 0 if len(chars) == 0 else int(''.join(chars))
def getImagesPaths(root_path):
for class_folder in [root_path + f for f in listdir(root_path)]:
category = int(class_folder[-1])
for case_folder in listdir(class_folder):
case_folder_path = class_folder + '/' + case_folder + '/'
img_files = [case_folder_path + file_name for file_name in listdir(case_folder_path)]
yield category, sorted(img_files, key = sortByLastDigits)
###Output
_____no_output_____
###Markdown
We define 3 datasets, which load 3 kinds of images: natural images, images taken through a green lens and images where the doctor applied iodine solution (which gives a dark red color). Each dataset has dynamic and static transformations which could be applied to the data. The static transformations are applied on the initialization of the dataset, while the dynamic ones are applied when loading each batch of data.
###Code
class SimpleImagesDataset(t.utils.data.Dataset):
def __init__(self, root_path, transforms_x_static = None, transforms_x_dynamic = None, transforms_y_static = None, transforms_y_dynamic = None):
self.dataset = []
self.transforms_x = transforms_x_dynamic
self.transforms_y = transforms_y_dynamic
for category, img_files in getImagesPaths(root_path):
for i in range(5):
img = pil.Image.open(img_files[i])
if transforms_x_static != None:
img = transforms_x_static(img)
if transforms_y_static != None:
category = transforms_y_static(category)
self.dataset.append((img, category))
def __getitem__(self, i):
x, y = self.dataset[i]
if self.transforms_x != None:
x = self.transforms_x(x)
if self.transforms_y != None:
y = self.transforms_y(y)
return x, y
def __len__(self):
return len(self.dataset)
class GreenLensImagesDataset(SimpleImagesDataset):
def __init__(self, root_path, transforms_x_static = None, transforms_x_dynamic = None, transforms_y_static = None, transforms_y_dynamic = None):
self.dataset = []
self.transforms_x = transforms_x_dynamic
self.transforms_y = transforms_y_dynamic
for category, img_files in getImagesPaths(root_path):
# Only the green lens image
img = pil.Image.open(img_files[-2])
if transforms_x_static != None:
img = transforms_x_static(img)
if transforms_y_static != None:
category = transforms_y_static(category)
self.dataset.append((img, category))
class RedImagesDataset(SimpleImagesDataset):
def __init__(self, root_path, transforms_x_static = None, transforms_x_dynamic = None, transforms_y_static = None, transforms_y_dynamic = None):
self.dataset = []
self.transforms_x = transforms_x_dynamic
self.transforms_y = transforms_y_dynamic
for category, img_files in getImagesPaths(root_path):
            # Only the iodine (red) image
img = pil.Image.open(img_files[-1])
if transforms_x_static != None:
img = transforms_x_static(img)
if transforms_y_static != None:
category = transforms_y_static(category)
self.dataset.append((img, category))
###Output
_____no_output_____
###Markdown
**Preprocess Data** Convert pytorch tensor to numpy array.
###Code
def to_numpy(x):
return x.cpu().detach().numpy()
###Output
_____no_output_____
###Markdown
Data transformations for the test and training sets.
###Code
norm_mean = [0.485, 0.456, 0.406]
norm_std = [0.229, 0.224, 0.225]
transforms_train = tv.transforms.Compose([
tv.transforms.RandomAffine(degrees = 45, translate = None, scale = (1., 2.), shear = 30),
# tv.transforms.CenterCrop(CROP_SIZE),
tv.transforms.Resize(IMAGE_SIZE),
tv.transforms.RandomHorizontalFlip(),
tv.transforms.ToTensor(),
tv.transforms.Lambda(lambda t: t.cuda()),
tv.transforms.Normalize(mean=norm_mean, std=norm_std)
])
transforms_test = tv.transforms.Compose([
# tv.transforms.CenterCrop(CROP_SIZE),
tv.transforms.Resize(IMAGE_SIZE),
tv.transforms.ToTensor(),
tv.transforms.Normalize(mean=norm_mean, std=norm_std)
])
y_transform = tv.transforms.Lambda(lambda y: t.tensor(y, dtype=t.long, device = 'cuda:0'))
###Output
_____no_output_____
###Markdown
Initialize pytorch datasets and loaders for training and test.
###Code
def create_loaders(dataset_class):
dataset_train = dataset_class(TRAIN_PATH, transforms_x_dynamic = transforms_train, transforms_y_dynamic = y_transform)
dataset_test = dataset_class(TEST_PATH, transforms_x_static = transforms_test,
transforms_x_dynamic = tv.transforms.Lambda(lambda t: t.cuda()), transforms_y_dynamic = y_transform)
loader_train = DataLoader(dataset_train, BATCH_SIZE, shuffle = True, num_workers = 0)
loader_test = DataLoader(dataset_test, BATCH_SIZE, shuffle = False, num_workers = 0)
return loader_train, loader_test, len(dataset_train), len(dataset_test)
loader_train_simple_img, loader_test_simple_img, len_train, len_test = create_loaders(SimpleImagesDataset)
###Output
_____no_output_____
###Markdown
**Visualize Data** Load a few images so that we can see the effects of the data augmentation on the training set.
###Code
def plot_one_prediction(x, label, pred):
x, label, pred = to_numpy(x), to_numpy(label), to_numpy(pred)
x = np.transpose(x, [1, 2, 0])
if x.shape[-1] == 1:
x = x.squeeze()
x = x * np.array(norm_std) + np.array(norm_mean)
plt.title(label, color = 'green' if label == pred else 'red')
plt.imshow(x)
def plot_predictions(imgs, labels, preds):
fig = plt.figure(figsize = (20, 5))
for i in range(20):
fig.add_subplot(2, 10, i + 1, xticks = [], yticks = [])
plot_one_prediction(imgs[i], labels[i], preds[i])
# x, y = next(iter(loader_train_simple_img))
# plot_predictions(x, y, y)
###Output
_____no_output_____
###Markdown
**Model** Define a few models to experiment with.
###Code
def get_mobilenet_v2():
model = t.hub.load('pytorch/vision', 'mobilenet_v2', pretrained=True)
model.classifier[1] = Linear(in_features=1280, out_features=4, bias=True)
model = model.cuda()
return model
def get_vgg_19():
model = tv.models.vgg19(pretrained = True)
model = model.cuda()
    model.classifier[6] = Linear(in_features=model.classifier[6].in_features, out_features=4).cuda()  # replace the head so it really outputs 4 classes (setting .out_features alone does not resize it)
return model
def get_res_next_101():
model = t.hub.load('facebookresearch/WSL-Images', 'resnext101_32x8d_wsl')
    model.fc = Linear(in_features=model.fc.in_features, out_features=4)  # replace the final layer with a 4-class head (setting .out_features alone does not resize it)
model = model.cuda()
return model
def get_resnet_18():
model = tv.models.resnet18(pretrained = True)
    model.fc = Linear(in_features=model.fc.in_features, out_features=4)  # replace the final layer with a 4-class head
model = model.cuda()
return model
def get_dense_net():
model = tv.models.densenet121(pretrained = True)
    model.classifier = Linear(in_features=model.classifier.in_features, out_features=4)  # replace the classifier with a 4-class head
model = model.cuda()
return model
class MobileNetV2_FullConv(t.nn.Module):
def __init__(self):
super().__init__()
self.cnn = get_mobilenet_v2().features
self.cnn[18] = t.nn.Sequential(
tv.models.mobilenet.ConvBNReLU(320, 32, kernel_size=1),
t.nn.Dropout2d(p = .7)
)
self.fc = t.nn.Linear(32, 4)
def forward(self, x):
x = self.cnn(x)
x = x.mean([2, 3])
x = self.fc(x);
return x
model_simple = t.nn.DataParallel(get_res_next_101())
###Output
Using cache found in /root/.cache/torch/hub/facebookresearch_WSL-Images_master
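###Markdown
A quick sanity check (added sketch, not part of the original experiment): run a dummy batch through the network to confirm the classification head outputs 4 logits per image. The dummy tensor below is an assumption used only for the shape check.
###Code
with t.no_grad():
    dummy = t.zeros(2, 3, IMAGE_SIZE, IMAGE_SIZE).cuda()  # assumed dummy batch of 2 images
    print(model_simple(dummy).shape)  # expected: torch.Size([2, 4]) with the 4-class head
###Output
_____no_output_____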
###Markdown
**Train & Evaluate** Timer utility function. This is used to measure the execution speed.
###Code
time_start = 0
def timer_start():
global time_start
time_start = time.time()
def timer_end():
return time.time() - time_start
###Output
_____no_output_____
###Markdown
This function trains the network and evaluates it at the same time. It outputs the metrics recorded during the training for both train and test. We are measuring accuracy and the loss. The function also saves a checkpoint of the model every time the accuracy is improved. In the end we will have a checkpoint of the model which gave the best accuracy.
###Code
def train_eval(optimizer, model, loader_train, loader_test, checkpoint_name, epochs):
metrics = {
'losses_train': [],
'losses_test': [],
'acc_train': [],
'acc_test': [],
'prec_train': [],
'prec_test': [],
'rec_train': [],
'rec_test': [],
'f_score_train': [],
'f_score_test': []
}
best_acc = 0
loss_fn = t.nn.CrossEntropyLoss()
try:
for epoch in range(epochs):
timer_start()
train_epoch_loss, train_epoch_acc, train_epoch_precision, train_epoch_recall, train_epoch_f_score = 0, 0, 0, 0, 0
test_epoch_loss, test_epoch_acc, test_epoch_precision, test_epoch_recall, test_epoch_f_score = 0, 0, 0, 0, 0
# Train
model.train()
for x, y in loader_train:
y_pred = model.forward(x)
loss = loss_fn(y_pred, y)
loss.backward()
optimizer.step()
# memory_stats()
optimizer.zero_grad()
y_pred, y = to_numpy(y_pred), to_numpy(y)
pred = y_pred.argmax(axis = 1)
ratio = len(y) / len_train
train_epoch_loss += (loss.item() * ratio)
train_epoch_acc += (sk.metrics.accuracy_score(y, pred) * ratio)
precision, recall, f_score, _ = sk.metrics.precision_recall_fscore_support(y, pred, average = 'macro')
train_epoch_precision += (precision * ratio)
train_epoch_recall += (recall * ratio)
train_epoch_f_score += (f_score * ratio)
metrics['losses_train'].append(train_epoch_loss)
metrics['acc_train'].append(train_epoch_acc)
metrics['prec_train'].append(train_epoch_precision)
metrics['rec_train'].append(train_epoch_recall)
metrics['f_score_train'].append(train_epoch_f_score)
# Evaluate
model.eval()
with t.no_grad():
for x, y in loader_test:
y_pred = model.forward(x)
loss = loss_fn(y_pred, y)
y_pred, y = to_numpy(y_pred), to_numpy(y)
pred = y_pred.argmax(axis = 1)
ratio = len(y) / len_test
test_epoch_loss += (loss * ratio)
test_epoch_acc += (sk.metrics.accuracy_score(y, pred) * ratio )
precision, recall, f_score, _ = sk.metrics.precision_recall_fscore_support(y, pred, average = 'macro')
test_epoch_precision += (precision * ratio)
test_epoch_recall += (recall * ratio)
test_epoch_f_score += (f_score * ratio)
metrics['losses_test'].append(test_epoch_loss)
metrics['acc_test'].append(test_epoch_acc)
metrics['prec_test'].append(test_epoch_precision)
metrics['rec_test'].append(test_epoch_recall)
metrics['f_score_test'].append(test_epoch_f_score)
if metrics['acc_test'][-1] > best_acc:
best_acc = metrics['acc_test'][-1]
                t.save({'model': model.state_dict()}, 'checkpoint {}.tar'.format(checkpoint_name))
print('Epoch {} acc {} prec {} rec {} f {} minutes {}'.format(
epoch + 1, metrics['acc_test'][-1], metrics['prec_test'][-1], metrics['rec_test'][-1], metrics['f_score_test'][-1], timer_end() / 60))
except KeyboardInterrupt as e:
print(e)
print('Ended training')
return metrics
###Output
_____no_output_____
###Markdown
Plot a metric for both train and test.
###Code
def plot_train_test(train, test, title, y_title):
plt.plot(range(len(train)), train, label = 'train')
plt.plot(range(len(test)), test, label = 'test')
plt.xlabel('Epochs')
plt.ylabel(y_title)
plt.title(title)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Plot precision - recall curve
###Code
def plot_precision_recall(metrics):
plt.scatter(metrics['prec_train'], metrics['rec_train'], label = 'train')
plt.scatter(metrics['prec_test'], metrics['rec_test'], label = 'test')
plt.legend()
plt.title('Precision-Recall')
plt.xlabel('Precision')
plt.ylabel('Recall')
###Output
_____no_output_____
###Markdown
Train a model for several epochs. The steps_learning parameter is a list of tuples. Each tuple specifies the steps and the learning rate.
###Code
def do_train(model, loader_train, loader_test, checkpoint_name, steps_learning):
for steps, learn_rate in steps_learning:
metrics = train_eval(t.optim.Adam(model.parameters(), lr = learn_rate, weight_decay = 0), model, loader_train, loader_test, checkpoint_name, steps)
print('Best test accuracy :', max(metrics['acc_test']))
        plot_train_test(metrics['losses_train'], metrics['losses_test'], 'Loss (lr = {})'.format(learn_rate), 'Loss')
        plot_train_test(metrics['acc_train'], metrics['acc_test'], 'Accuracy (lr = {})'.format(learn_rate), 'Accuracy')
###Output
_____no_output_____
###Markdown
Perform actual training.
###Code
def do_train(model, loader_train, loader_test, checkpoint_name, steps_learning):
t.cuda.empty_cache()
for steps, learn_rate in steps_learning:
metrics = train_eval(t.optim.Adam(model.parameters(), lr = learn_rate, weight_decay = 0), model, loader_train, loader_test, checkpoint_name, steps)
index_max = np.array(metrics['acc_test']).argmax()
print('Best test accuracy :', metrics['acc_test'][index_max])
print('Corresponding precision :', metrics['prec_test'][index_max])
print('Corresponding recall :', metrics['rec_test'][index_max])
print('Corresponding f1 score :', metrics['f_score_test'][index_max])
plot_train_test(metrics['losses_train'], metrics['losses_test'], 'Loss (lr = {})'.format(learn_rate), 'Loss')
plot_train_test(metrics['acc_train'], metrics['acc_test'], 'Accuracy (lr = {})'.format(learn_rate), 'Accuracy')
plot_train_test(metrics['prec_train'], metrics['prec_test'], 'Precision (lr = {})'.format(learn_rate), 'Precision')
plot_train_test(metrics['rec_train'], metrics['rec_test'], 'Recall (lr = {})'.format(learn_rate), 'Recall')
plot_train_test(metrics['f_score_train'], metrics['f_score_test'], 'F1 Score (lr = {})'.format(learn_rate), 'F1 Score')
plot_precision_recall(metrics)
do_train(model_simple, loader_train_simple_img, loader_test_simple_img, 'resnext 101', [(50, 1e-4)])
# checkpoint = t.load('/content/checkpoint simple_1.tar')
# model_simple.load_state_dict(checkpoint['model'])
###Output
_____no_output_____ |
notebooks/03_loops.ipynb | ###Markdown
Repeated Execution for loop
###Code
range(10)
list(range(10))
# range(start, stop, step)
list(range(2, 11, 2))
for i in range(4):
print('hola')
for i in range(4):
print(i, 'hola')
l1 = [1, 3, 8, 5]
for i in range(len(l1)):
print('index:', i, '\tvalue:', l1[i])
for n in l1:
print(n)
n = n + 1
l1
for i in range(len(l1)):
l1[i] += 1
l1
name = input('what is your name?: ')
print('hello', name)
n1 = int(input())
n2 = int(input())
n1 + n2
eval('10*3+2')
###Output
_____no_output_____ |
notebooks/1_data_summary.ipynb | ###Markdown
Data Summary John R. Starr; [email protected] data is split into two folders/files, en/TEP.xml and fa/TEP.xml. Let's import what we need to properly search through this data:
###Code
import pandas as pd
import numpy as np
import nltk
import xml.etree.ElementTree as ET
from lxml import etree
###Output
_____no_output_____
###Markdown
Making sure that we're in the correct directory:
###Code
import os
os.getcwd()
###Output
_____no_output_____
###Markdown
First, we need to build a parser works for xml. I found documentation on etree.XMLParser() [here](https://lxml.de/api/lxml.etree.XMLParser-class.html). After some preliminary efforts in building trees, I found that some of my data has corrupted characters (or at least something along those lines). An example of one of these encoding errors can be found in the following sentence: simple caf oronary . freak show choked to death . When opened in Notepad++, the space between "caf" and "oronary" is the abbreviation NUL highlighted in black. Other problems occur later in the dataset.In order to get my parser to work, I've hand-modified the dataset. Any modifications that I made can be found in the data_modifications.txt file [here](https://github.com/Data-Science-for-Linguists-2019/Scrambling-in-English-to-Persian-Subtitles/blob/master/data_modifications.txt). The name of this edited file is 'TEP_mod.xml' and will be used for the remainder of this project.
###Code
# Creating a new parser
parser_full = etree.XMLParser(recover = True)
tree_eng = ET.parse('Private/TEP/raw/en/TEP_mod.xml', parser = parser_full)
###Output
_____no_output_____
###Markdown
Now, let's build the root and see how our data is structured. I've looked through the XML file and noticed that there is a bit of a heading that looks like this (I have added spaces between the greater/less than symbols so that it remains visible in this file): After that, we have the body character, followed by a sentence. So, we'll use .findall() to start where we want it to start, and hopefully we'll be able to get an idea of what our data looks like:
###Code
root_eng = tree_eng.getroot()
root_eng.items()
for item in root_eng.findall('./body/s')[:5]:
print(item.text)
#print(dir(item))
print(item.values())
print(item.items())
###Output
raspy breathing .
['1']
[('id', '1')]
dad .
['2']
[('id', '2')]
maybe its the wind .
['3']
[('id', '3')]
no .
['4']
[('id', '4')]
stop please stop .
['5']
[('id', '5')]
###Markdown
It looks like our data uses the ID number to mark what text comes after it. This works well, as we'll be able to match up the keys between the two files to combine them! Let's create a test dictionary in which the key is the item and the text is the value, replacing all extraneous information:
###Code
eng_lines_test = {}
for item in root_eng.findall('./body/s')[:5]:
eng_lines_test[int(str(item.values()).replace(',', '').replace("['", '').replace("']", ''))] = str(item.text.replace(',', '').replace(' .', ''))
eng_lines_test.keys()
eng_lines_test.values()
###Output
_____no_output_____
###Markdown
Awesome! Let's do the same for the Farsi text as well!
###Code
tree_far = ET.parse('Private/TEP/raw/fa/TEP.xml', parser = parser_full)
far_lines_test = {}
root_far = tree_far.getroot()
for item in root_far.findall('./body/s')[:5]:
far_lines_test[int(str(item.values()).replace(',', '').replace("['", '').replace("']", ''))] = str(item.text.replace(',', '').replace(' .', ''))
far_lines_test.keys()
far_lines_test.values()
###Output
_____no_output_____
###Markdown
All in the clear! Now, let's combine the two into a single DataFrame object!
###Code
test_DF = pd.Series(eng_lines_test).to_frame('eng').join(pd.Series(far_lines_test).to_frame('far'), how='outer')
test_DF
###Output
_____no_output_____
###Markdown
It worked! Yay! Let's apply this methodology for both of the files in full, rather than the little pieces we've been testing:
###Code
eng_lines = {}
for item in root_eng.findall('./body/s'):
eng_lines[int(str(item.values()).replace(',', '').replace("['", '').replace("']", ''))] = str(item.text.replace(',', '').replace(' .', ''))
print(len(eng_lines))
far_lines = {}
for item in root_far.findall('./body/s'):
far_lines[int(str(item.values()).replace(',', '').replace("['", '').replace("']", ''))] = str(item.text.replace(',', '').replace(' .', ''))
print(len(far_lines))
###Output
612086
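###Markdown
Side note (added sketch): `Element.get('id')` reads the id attribute directly, which avoids the string surgery on `item.values()`. The dictionary below is equivalent to `eng_lines`, keeping the same comma/period cleanup; the name `eng_lines_alt` is only for illustration.
###Code
# Sketch: the same id -> sentence mapping, reading the 'id' attribute directly.
eng_lines_alt = {int(item.get('id')): item.text.replace(',', '').replace(' .', '')
                 for item in root_eng.findall('./body/s')}
len(eng_lines_alt)
###Output
_____no_output_____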
###Markdown
Cool! We have the same numbers. Let's see what the DF would look like, and then add some more information that might be useful to use in the future.
###Code
full_df = pd.Series(eng_lines).to_frame('Eng').join(pd.Series(far_lines).to_frame('Far'), how='outer')
full_df.index.name = 'ID'
full_df
full_df.describe()
###Output
_____no_output_____
###Markdown
Some interesting things to point out:- It appears that the subtitles include non-spoken acts of communication, such as "raspy breathing" or "music playing". I'm not entirely sure how to remove this data, as it is not marked in any particular way.- Also, poor William in the beginning! It seems that he's having a tough time...- "Yes" and its Persian translation "بله" are the most common words, but they do not occur at the same frequency... This may be because it is common for Persian to repeat "بله" when speaking casually.Back to the data. Let's create three more columns for both languages: token, token count (or length), and type. This will help us navigate the data when performing analysis:
###Code
# Constructing Token columns
full_df['Eng_Tok'] = full_df['Eng'].apply(nltk.word_tokenize)
full_df['Far_Tok'] = full_df['Far'].apply(nltk.word_tokenize)
# Constructing Len columns
full_df['Eng_Len'] = full_df['Eng_Tok'].apply(len)
full_df['Far_Len'] = full_df['Far_Tok'].apply(len)
# Constructing Type columns
full_df['Eng_Types'] = full_df['Eng_Tok'].apply(set)
full_df['Far_Types'] = full_df['Far_Tok'].apply(set)
###Output
_____no_output_____
###Markdown
Seeing our resulting DF:
###Code
full_df.head()
###Output
_____no_output_____
###Markdown
How many one-word lines are there? If a significant portion of my data consists of one-word lines, then it will be pretty challenging to get results analyzing the syntax of the subtitles. I'm going to predict that there will be more Persian one-word lines, as you don't need to include the subject in Persian and can simply utter a verb.
###Code
# Seeing how many one-word lines there are in English, Farsi, and both.
eng_1word = [x for x in full_df['Eng_Len'] if x == 1]
print(len(eng_1word))
far_1word = [x for x in full_df['Far_Len'] if x == 1]
print(len(far_1word))
both_1word = full_df[(full_df['Eng_Len'] == 1) & (full_df['Far_Len'] == 1)]  # rows that are one word in both languages
print(len(both_1word))
# Overall length of file:
len(full_df)
###Output
59483
35506
35506
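###Markdown
Added sketch: the expression below directly tests the claim discussed next, namely whether every line that is one word in Farsi is also one word in English.
###Code
# Sketch: True only if all one-word Farsi lines are also one-word English lines.
(full_df.loc[full_df['Far_Len'] == 1, 'Eng_Len'] == 1).all()
###Output
_____no_output_____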
###Markdown
Interesting! It looks like my hypothesis was wrong. Every Persian line that is one word is also one word in English. However, the reverse is not true. This is something I will investigate further in my data analysis. These sentences do not take up a significant portion of my data, but I am considering removing them because they do not help me with my analysis. I will be meeting with one of the instructors to get a second opinion on this matter.Let's pickle the data as a whole and move it to my private folder, pickle a small portion of the data, get a little more information on the data, and then move on to POS tagging and shallow parsing!
###Code
full_df.to_pickle('full_df.pkl') # This will be put in my private folder.
# Seeing general information about the DF
full_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 612086 entries, 1 to 612086
Data columns (total 8 columns):
Eng 612086 non-null object
Far 612086 non-null object
Eng_Tok 612086 non-null object
Far_Tok 612086 non-null object
Eng_Len 612086 non-null int64
Far_Len 612086 non-null int64
Eng_Types 612086 non-null object
Far_Types 612086 non-null object
dtypes: int64(2), object(6)
memory usage: 62.0+ MB
###Markdown
Well, we don't have any null values! This is good (and expected). What's the average sentence length for each language?
###Code
full_df.Eng_Len.value_counts()
full_df.Far_Len.value_counts()
# Average English sentence length
eng_len_tot = []
for item in full_df['Eng_Len']:
eng_len_tot.append(item)
eng_len_avg = (sum(eng_len_tot))/len(full_df)
print(eng_len_avg)
# Average Farsi sentence length
far_len_tot = []
for item in full_df['Far_Len']:
far_len_tot.append(item)
far_len_avg = (sum(far_len_tot))/len(full_df)
print(far_len_avg)
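# A more idiomatic pandas alternative (added sketch): the column means give the same averages directly.
print(full_df['Eng_Len'].mean(), full_df['Far_Len'].mean())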
###Output
6.46187953980323
|
source/(Step1) Fill missing values (9850).ipynb | ###Markdown
Fill country name based on institution name
###Code
df[df.country==""].institution.unique()
df['country'] = df.apply(lambda x: fill_country(x['institution'], x['country']), axis=1)
df[df.country==""].institution.unique()
###Output
_____no_output_____
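###Markdown
`fill_country` is used above but its definition is not shown in this notebook. A minimal sketch of what it is assumed to do follows; the lookup table is illustrative, not the real mapping, and the function is given a different name so it does not shadow the real helper.
###Code
# Hypothetical sketch of fill_country: keep an existing country, otherwise look the institution up.
institution_to_country = {'European Central Bank': 'Euro area'}  # assumed, illustrative mapping
def fill_country_sketch(institution_name, country_name):
    if country_name != '':
        return country_name
    return institution_to_country.get(institution_name, '')
###Output
_____no_output_____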
###Markdown
Fill institution name
###Code
df[df.institution==""].country.unique()
with open('item_dict_Federal.pkl', 'rb') as f:
filter_dict = pickle.load(f)
len(filter_dict)
records = []
for key, val in filter_dict.items():
institution = val[0]
records.append([key, institution])
df_federal = pd.DataFrame(records, columns=['id', 'institution'])
df_federal
def fill_institution(id_value, institution_name):
if institution_name != '': return institution_name
else:
items = df_federal[df_federal.id==id_value]
if len(items) > 0:
return items.iloc[0]['institution']
return ''
df['institution'] = df.apply(lambda x: fill_institution(x['id'], x['institution']), axis=1)
###Output
_____no_output_____
###Markdown
Additional data (All institutions)
###Code
with open('item_dict_all_institutions.pkl', 'rb') as f:
filter_dict = pickle.load(f)
len(filter_dict)
records = []
for key, val in filter_dict.items():
institution = val[0]
records.append([key, institution])
df_inst_all = pd.DataFrame(records, columns=['id', 'institution'])
df_inst_all
from tqdm import tqdm
additional_rows = []
for _, row in tqdm(df_inst_all.iterrows()):
if len(df[df.id==row.id])==0:
additional_rows.append([row.id, row.institution])
for i in range(len(additional_rows)):
additional_rows[i].append('Euro area')
additional_df = pd.DataFrame(additional_rows, columns=['id', 'institution', 'country'])
df = pd.concat([df, additional_df])
df
###Output
_____no_output_____
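###Markdown
Side note (added sketch): the row-by-row membership check above can also be written as a vectorized filter, which is usually much faster on large frames. This assumes `id` uniquely identifies a speech, and it is an equivalent formulation of the loop before the concatenation; run at this point in the notebook it would find nothing left to add.
###Code
# Sketch: rows of df_inst_all whose id is not already present in df, with the same 'Euro area' country fill.
missing = df_inst_all[~df_inst_all['id'].isin(df['id'])].copy()
missing['country'] = 'Euro area'
# pd.concat([df, missing]) would then reproduce the concatenation performed above.
###Output
_____no_output_____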
###Markdown
check missing values
###Code
df[df.country==""]
###Output
_____no_output_____
###Markdown
We can observe that a few files are not mapped to the institution filter. Those unmapped files are in the 'Unmapped' directory (ex. 'Speech_texts/PDF/2020/Unmapped').Let me explain an example of the unmapped files: https://www.bis.org/review/r180718c.htm is related to the "Federal Reserve Bank of New York".But this file is not mapped to the "Federal Reserve Bank of New York" with respect to the institution filter. Please see the below link: https://www.bis.org/cbspeeches/?cbspeeches=ZnJvbT0mdGlsbD0maW5zdGl0dXRpb25zPTIyJm9iamlkPWNic3BlZWNoZXMmcGFnZT03JnBhZ2luZ19sZW5ndGg9MTAmc29ydF9saXN0PWRhdGVfZGVzYyZ0aGVtZT1jYnNwZWVjaGVzJm1sPWZhbHNlJm1sdXJsPSZlbXB0eWxpc3R0ZXh0PQ%253D%253DYou can check that the https://www.bis.org/review/r180718c.htm does not appear in the filtering results of "Federal Reserve Bank of New York".
###Code
df[df.institution==""]
###Output
_____no_output_____
###Markdown
SAVE
###Code
df.to_csv('filter_info.csv', index=False)
###Output
_____no_output_____ |
day_3.ipynb | ###Markdown
A bit about terminology I may, or may not, have mentioned a few terms that you probably could understand from the context. It is, however, very important to understand what they really mean. You might think of ``program`` as a really big thing with tons of lines of code, while it might be as simple as ```print("Hello World!")```I used to think that a lot! It took me a lot of time to get over it.* A _program_ is just code that does something. It could be as big as your operating system, or as small as a hello world message.* Run: when you hear or read the phrase _run a computer program_, it means you use a computer program to translate your human-readable code into machine language, then execute that machine code.* Compiled programming language: compilation is the process of converting our code (in its human-readable form) into machine code. Note that in Python we do not have such a compilation step. The program that does this compilation is called the `compiler`; in Python there is no compilation step (hence, no compiler), we have an interpreter that does all of this on the fly! You will see later that both of them have their pros and cons, but typing commands and instantly getting a result turns out to be exactly what data scientists (and others) need.* Source code. It is just your raw Python (py), or IPython (ipynb) file. * IDE and text editor. A text editor is a program that you use to edit text files, whether those files are \*.txt files or \*.py files. When you write small simple programs you can just use the IDLE file editor or even Windows' Notepad; however, modern text editors come in very handy with a lot of useful features (and possibly, crappy ones). I prefer Microsoft's VS Code text editor, however there are a bunch of other great tools (atom, sublime, emacs, vim, and others). Modern text editors provide very useful syntax highlighting, code completion, embedded terminals and basic debuggers. -----These definitions are by no means accurate. If you need a more formal definition you should google them. As far as this workshop is concerned, those definitions are valid. Dataset|Name |hw1 |hw2 |hw3 |hw4 |hw5|---|----|---|---|----|----||Ahmed |7 |8 |0 |0 |5|Khalid |7.5 |1 |0 |0|Ameen |9 |10 |9 |10 |8|Amjad |6 |7 |8 |9 |10|Akram |10 |9 |5 |4 |2... In the last day, we wanted to compute the grades of some students. To do that we used lists and nested lists, and we also used functions to make our code even better. Now, we want to:* Use another data structure called `dicts`.* Make our code even better (refactoring it, using functions for abstraction, and so on).* Work with a not-very-real-world data set that needs a bit of cleaning.From the previous day we made this function ```pythondef average(assignments): return sum(assignments) / len(assignments)```go ahead and use it, or use your own. This function does really one simple thing: for each student it computes their average grade. That's all it does. 
Using this function, and considering that we store our class grades in a list of lists (where each list corresponds to a student's assignments), our computations would be like this```pythonstudent_x = average(assignments[x])```where x is an integer that corresponds to the student's index within the list. Needless to say, this is not very great (remember, you have to know the place of each student within the `assignments` list).----------------------------------------Python has another great data structure that comes in handy for such situations. It is called `dict` or dictionary. A `dict` in Python is a mapping from key => value, or a lookup data structure. You can think of it like this|id |hw1|hw2|hw3|hw4|hw5||---|----|----|----|---|---|| amin | 5 | 5 | 10 | 0 | 7||tarig | 6 | 7 | 7 | 5 | 0||amjad | 0 | 9 | 10 | 5 | 6||mohamed| 5 | 4 | 5 | 2 | 8|We want to store each student's grades in a data structure and call it back using a key that is easy for us (e.g., his University ID, or even his *name*). We can do that using Python's `dicts` (in later days we will find that there are other data structures that are even more suitable for this case, e.g. `pandas`).------------The syntax for making a dict is very simple: `{key: value}`. You can *only* use immutable types (data structures) as keys; you cannot use a list as a key.```python>>> my_dict = {"a": 1}```Now, we can do pretty much all of the things we have done with other types```python>>> print(my_dict)```We can also access a specific value of our dict by using its key```python>>> print(my_dict["a"])```We can also add other values to this dict```python>>> my_dict["b"] = 2>>> print(my_dict["b"])```and a bunch of other cool stuff.
###Code
students_grades = {"ahmed": [7, 5, 3, 6, 7], "khalid": [4, 5, 7, 8, 7],
"ameen": [1, 0, 0, 5, 0]}
students_grades
###Output
_____no_output_____
###Markdown
Exercise 1Try to add another item to `students_grades`, e.g., add "ahmed" with grades [1, 5, 6, 2, 10]. Remember not to confuse strings with variables.
###Code
students_grades = {
"ahmed": [1,3,4,5,6], "amjad": [4, 5, 6, 3, 1],
"tarig": [0, 5, 10, 7, 10], "mohamed": [1, 2, 3, 0, 6], "amna": [0, 1, 5, 5, 10],
"mena": [0, 0, 10, 10, 6], "ruba": [0, 0, 5, 10, 10], "khadiga": [10, 10, 10, 10, 10],
"lisa": [10, 6, 8, 10, 9], "mugtaba": [7, 6, 6, 4, 4], "ramy": [10, 9, 7, 5, 3]
}
###Output
_____no_output_____
###Markdown
Exercise 2
What is the type of the values of students_grades? E.g., what will `students_grades["ahmed"]` return?
Using `dict`
Let us start with our whole-new data structure and rewrite our program accordingly. Starter code is provided for you. Before we implement our program, let us recall what it is supposed to do and how we are going to do it `(pseudo-code)`.
We want to have a main function called `compute_students_grades` that, as the name implies, computes the grades for the students. Now let us see how we can do that. I mean, somebody needs to code that program, right? All I've told you is that our program can compute a student's grade. That is not useful; that is more like a hope. It would be great if we were given a recipe (or made one ourselves).
`compute_students_grades` recipe (or algorithm)
* Our function takes in a dictionary with students' names (or IDs) as keys and lists of their assignment grades as values.
* For each student, we compute the average grade from that list of grades.
* We have to store that value somewhere so that we can access it later. There are plenty of ways of doing this; we will discuss them later.
* Remember that all your computations in Python (or any other language) live in RAM. Whenever you close Python, or your computer shuts down, you lose them (you have to recompute them). It would be great if we could store them somewhere persistent, e.g., your hard drive.
First let us start with a toy example and compute the average grade for only one student. Consider this function
```python
def nthroot(x, n):
    # x is the number to take the root of, n is the degree of the root.
    # Note the parentheses: x ** 1/n would be (x ** 1)/n because of operator precedence.
    return x ** (1 / n)
```
```python
>>> a = nthroot(5, 4)
>>> print(a)
```
A function can have multiple arguments and can even return multiple outputs (more on that later). Now back to our example. We want to make a function that takes in a `dictionary` and a key and returns the average grade for that key. We could have dropped the dictionary and hard-coded it, but that would be bad practice; you might want to reuse this code for other assignments, don't you?
```python
def average(assignments, key):
    # assignments is a dictionary
    print(assignments[key], type(assignments[key]))
    # The previous line is there to give you an idea about the `assignments` variable, and its type too
    # your code goes here...
```
```python
>>> ahmed_grade = average(students_grades, "ahmed")
>>> print(ahmed_grade)
```
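One possible way to fill in the `your code goes here...` part, as a sketch (not necessarily the intended solution):
```python
def average(assignments, key):
    # assignments is a dictionary mapping a student's name to a list of grades
    grades = assignments[key]
    return sum(grades) / len(grades)

print(average(students_grades, "ahmed"))
```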
###Code
def nthroot(x, n):
    # x is the number to take the root of,
    # n is the degree of the root.
    # Parentheses matter here: x ** 1/n would be (x ** 1)/n due to operator precedence.
    return x ** (1 / n)
###Output
_____no_output_____
###Markdown
Loops in Python
In the previous toy example, our function `average` only computes the average grade for one student. To compute it for the rest of the class we need to use a loop.
```python
>>> list_things = ["ahmed", 1, 4, "khalid"]
>>> for item in list_things:
>>>     print(item)
```
This is basically how to use a loop in Python. An alternative way of looping is like this
```python
>>> list_things = ["ahmed", 1, 4, "khalid"]
>>> for item in range(len(list_things)):
>>>     print(item)
>>> # This will print only the integers from 0 to len(list_things) - 1,
>>> # which is not exactly what you wanted. You can use Python indexing to handle that.
>>> # Try this instead
>>> for item in range(len(list_things)):
>>>     print(list_things[item])
```
The question is, can I iterate over a dictionary? Lists are what Python calls iterables, and so are dicts. That means you can iterate through them.
```python
>>> names = {"ahmed": 10, "khalid": 7, "ramy": 10}
>>> for name in names:
>>>     print(name)
```
and that will _iterate_ through the keys of `names`. You can also iterate explicitly over the keys, the values, or the key/value pairs.
```python
>>> names = {"ahmed": 10, "khalid": 7, "ramy": 10}
>>> # Iterating through the keys of `names`
>>> for key in names.keys():
>>>     print(key)
>>> # Or you can iterate through the (key, value) pairs
>>> for pair in names.items():
>>>     print(pair)
```
Exercise 3
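As an illustration of looping over a dict of grades, a minimal sketch (assuming the `students_grades` dict defined earlier):
```python
# Compute every student's average grade and store the results in a new dict
class_averages = {}
for name, grades in students_grades.items():
    class_averages[name] = sum(grades) / len(grades)

print(class_averages)
```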
###Code
%run style.css
ls
%magic
###Output
_____no_output_____
###Markdown
Depth-first search (深さ優先探索) https://atc001.contest.atcoder.jp/tasks/dfs_a
###Code
import sys
sys.setrecursionlimit(200000)
judge = []
answer = "No"
H, W = map(int, input().split())
maze = [list(map(str, input().split())) for _ in range(H)]
def search(x,y, maze):
    # Use an (x, y) tuple as the visited marker; concatenating strings (e.g. "1"+"12" vs "11"+"2")
    # could collide on larger grids.
    judge_number = (x, y)
if x < 0 or y < 0:
return
elif x > H -1 or y > W-1:
return
elif maze[x][0][y] == "#":
return
elif judge_number in judge:
return
elif maze[x][0][y] == "g":
answer = "Yes"
print(answer)
sys.exit()
else:
judge.append(judge_number)
search(x+1, y, maze)
search(x-1, y, maze)
search(x, y+1, maze)
search(x, y-1, maze)
# "S"の座標を特定する
for i in range(H):
for j in range(W):
if maze[i][0][j] == "s":
sx = i
sy = j
search(sx, sy, maze)
print(answer)
###Output
4 4
...s
####
....
..g.
No
###Markdown
Synthetic Kadomatsuhttps://atcoder.jp/contests/abc119/tasks/abc119_c
###Code
import math
import sys
import itertools
import queue
from fractions import gcd
def lcm(a, b):
return a * b // gcd(a, b)
mod = 1000000007
if __name__ == "__main__":
N, A, B, C = map(int, input().split())
l = []
for i in range(N):
l.append(int((input())))
INF = 10 ** 9
def dfs(idx, a, b, c):
if idx == N:
            # Because a, b and c all start at 0, the first bamboo merged into each of them is also
            # counted as a composition, so we subtract the 30 points of composition cost that were
            # never actually incurred. Since at least one bamboo is always used for each of A, B and C,
            # always subtracting 30 is safe.
return abs(A - a) + abs(B - b) + abs(C - c) - 30 if min(a, b, c) > 0 else INF
        # For each bamboo, try every option below: skip it, merge it into A, merge it into B, or merge it into C
no_synth = dfs(idx + 1, a, b, c)
synth_A = dfs(idx + 1, a + l[idx], b, c) + 10
synth_B = dfs(idx + 1, a, b + l[idx], c) + 10
synth_C = dfs(idx + 1, a, b, c + l[idx]) + 10
return min(no_synth, synth_A, synth_B, synth_C)
print(dfs(0,0,0,0))
###Output
5 100 90 80
98
40
30
21
80
23
###Markdown
Anti-Divisionhttps://atcoder.jp/contests/abc131/tasks/abc131_c
###Code
# Least common multiple (LCM) of C and D, used for inclusion-exclusion
import fractions
A,B,C,D = map(int, input().split())
lcm_number = C*D // fractions.gcd(C, D)
all_number = B -A + 1
answer_C = B // C - (A-1) // C
answer_D = B // D - (A-1) // D
answer_lcm = B // lcm_number - (A-1) // lcm_number
print(all_number - answer_C - answer_D + answer_lcm)
###Output
4 9 2 3
2
###Markdown
###Code
import pandas as pd
data = pd.read_csv("https://raw.githubusercontent.com/NatnaelSisay/task-2/master/output.csv");
print(data.sample(10))
cleanTweet = data[['original_text', 'polarity']].rename({'original_text': 'clean_text'}, axis=1)
cleanTweet.sample(10)
cleanTweet.info()
def text_category(p):
if p > 0:
return 'positive'
elif p < 0:
return 'negative'
else:
return 'neutral'
cleanTweet['score'] = cleanTweet["polarity"].map(text_category)
import matplotlib.pyplot as plt
import seaborn as sns
fig,axis=plt.subplots(figsize=(8,6))
cleanTweet.groupby('score')['clean_text'].count().plot.bar(ax=axis)
fig,axis=plt.subplots(figsize=(8,6))
cleanTweet.groupby('score')['clean_text'].count().plot.pie(ax=axis)
#remove rows from cleanTweet where polarity = 0
cleanTweet = cleanTweet[cleanTweet['polarity'] != 0]
cleanTweet.reset_index(drop=True, inplace=True)
cleanTweet.head()
def get_score(v):
if v == 'positive':
return 1
else:
return 0
cleanTweet['scoremap'] = cleanTweet['score'].map(get_score)
cleanTweet.head()
X = cleanTweet['clean_text'] #for clean_text
y = cleanTweet['scoremap'] #for scoremap
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X, y)
from sklearn.linear_model import SGDClassifier
from sklearn.feature_extraction.text import CountVectorizer
# Vectorize the tweets using counts of trigrams (3-word sequences) only
cv = CountVectorizer(ngram_range=(3, 3))
X_train_cv = cv.fit_transform(x_train)
X_test_cv = cv.transform(x_test)
X_train_cv
clf = SGDClassifier()
clf.fit(X_train_cv, y_train)
predictions = clf.predict(X_test_cv)
from sklearn.metrics import confusion_matrix
results = confusion_matrix(y_test, predictions)
results
from sklearn.metrics import accuracy_score
accuracy_score(y_test, predictions)
from sklearn.metrics import classification_report
print(classification_report(y_test, predictions))
import numpy as np
import json
import glob
#Gensim
import gensim
import gensim.corpora as corpora
from gensim.utils import simple_preprocess
from gensim.models import CoherenceModel
#spacy
import spacy
from nltk.corpus import stopwords
!pip install pyLDAvis
import nltk
nltk.download('stopwords')
stopwords = stopwords.words("english")
print(stopwords)
tweet = data[['original_text']]
print(tweet.sample(10))
def lemmatization(texts, allowed_postags=["NOUN", "ADJ", "VERB", "ADV"]):
nlp = spacy.load("en_core_web_sm", disable=["parser", "ner"])
texts_out = []
for text in texts:
doc = nlp(text)
new_text = []
for token in doc:
if token.pos_ in allowed_postags:
new_text.append(token.lemma_)
final = " ".join(new_text)
texts_out.append(final)
return (texts_out)
# Pass the actual text column; iterating over the DataFrame itself would only yield its column names
lemmatized_texts = lemmatization(tweet['original_text'])
print (lemmatized_texts)
def gen_words(texts):
final = []
for text in texts:
new = gensim.utils.simple_preprocess(text, deacc=True)
final.append(new)
return (final)
data_words = gen_words(lemmatized_texts)
print (data_words)
###Output
_____no_output_____ |
DeployModelWithAWS/SageMaker Project.ipynb | ###Markdown
Step 6 (again) - Deploy the model for the web appNow that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review.As we saw above, by default the estimator which we created, when deployed, will use the entry script and directory which we provided when creating the model. However, since we now wish to accept a string as input and our model expects a processed review, we need to write some custom inference code.We will store the code that we write in the `serve` directory. Provided in this directory is the `model.py` file that we used to construct our model, a `utils.py` file which contains the `review_to_words` and `convert_and_pad` pre-processing functions which we used during the initial data processing, and `predict.py`, the file which will contain our custom inference code. Note also that `requirements.txt` is present which will tell SageMaker what Python libraries are required by our custom inference code.When deploying a PyTorch model in SageMaker, you are expected to provide four functions which the SageMaker inference container will use. - `model_fn`: This function is the same function that we used in the training script and it tells SageMaker how to load our model. - `input_fn`: This function receives the raw serialized input that has been sent to the model's endpoint and its job is to de-serialize and make the input available for the inference code. - `output_fn`: This function takes the output of the inference code and its job is to serialize this output and return it to the caller of the model's endpoint. - `predict_fn`: The heart of the inference script, this is where the actual prediction is done and is the function which you will need to complete.For the simple website that we are constructing during this project, the `input_fn` and `output_fn` methods are relatively straightforward. We only require being able to accept a string as input and we expect to return a single value as output. You might imagine though that in a more complex application the input or output may be image data or some other binary data which would require some effort to serialize. (TODO) Writing inference codeBefore writing our custom inference code, we will begin by taking a look at the code which has been provided.
###Code
!pygmentize serve/predict.py
###Output
import argparse
import json
import os
import pickle
import sys
import sagemaker_containers
import pandas as pd
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data

from model import LSTMClassifier

from utils import review_to_words, convert_and_pad

def model_fn(model_dir):
    """Load the PyTorch model from the `model_dir` directory."""
    print("Loading model.")

    # First, load the parameters used to create the model.
    model_info = {}
    model_info_path = os.path.join(model_dir, 'model_info.pth')
    with open(model_info_path, 'rb') as f:
        model_info = torch.load(f)

    print("model_info: {}".format(model_info))

    # Determine the device and construct the model.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = LSTMClassifier(model_info['embedding_dim'], model_info['hidden_dim'], model_info['vocab_size'])

    # Load the store model parameters.
    model_path = os.path.join(model_dir, 'model.pth')
    with open(model_path, 'rb') as f:
        model.load_state_dict(torch.load(f))

    # Load the saved word_dict.
    word_dict_path = os.path.join(model_dir, 'word_dict.pkl')
    with open(word_dict_path, 'rb') as f:
        model.word_dict = pickle.load(f)

    model.to(device).eval()

    print("Done loading model.")
    return model

def input_fn(serialized_input_data, content_type):
    print('Deserializing the input data.')
    if content_type == 'text/plain':
        data = serialized_input_data.decode('utf-8')
        return data
    raise Exception('Requested unsupported ContentType in content_type: ' + content_type)

def output_fn(prediction_output, accept):
    print('Serializing the generated output.')
    return str(prediction_output)

def predict_fn(input_data, model):
    print('Inferring sentiment of input data.')

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    if model.word_dict is None:
        raise Exception('Model has not been loaded properly, no word_dict.')

    # TODO: Process input_data so that it is ready to be sent to our model.
    #       You should produce two variables:
    #         data_X   - A sequence of length 500 which represents the converted review
    #         data_len - The length of the review
    data_X, data_len = convert_and_pad(model.word_dict, review_to_words(input_data))

    # Using data_X and data_len we construct an appropriate input tensor. Remember
    # that our model expects input data of the form 'len, review[500]'.
    data_pack = np.hstack((data_len, data_X))
    data_pack = data_pack.reshape(1, -1)

    data = torch.from_numpy(data_pack)
    data = data.to(device)

    # Make sure to put the model into evaluation mode
    model.eval()

    # TODO: Compute the result of applying the model to the input data. The variable `result` should
    #       be a numpy array which contains a single integer which is either 1 or 0
    with torch.no_grad():
        output = model.forward(data)

    result = np.round(output.numpy())

    return result
###Markdown
As mentioned earlier, the `model_fn` method is the same as the one provided in the training code and the `input_fn` and `output_fn` methods are very simple and your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory.**TODO**: Complete the `predict_fn()` method in the `serve/predict.py` file. Deploying the model
Now that the custom inference code has been written, we will create and deploy our model. To begin with, we need to construct a new PyTorchModel object which points to the model artifacts created during training and also points to the inference code that we wish to use. Then we can call the deploy method to launch the deployment container.**NOTE**: The default behaviour for a deployed PyTorch model is to assume that any input passed to the predictor is a `numpy` array. In our case we want to send a string so we need to construct a simple wrapper around the `RealTimePredictor` class to accommodate simple strings. In a more complicated situation you may want to provide a serialization object, for example if you wanted to send image data.
###Code
from sagemaker.predictor import RealTimePredictor
from sagemaker.pytorch import PyTorchModel
class StringPredictor(RealTimePredictor):
def __init__(self, endpoint_name, sagemaker_session):
super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')
model = PyTorchModel(model_data=estimator.model_data,
role = role,
framework_version='0.4.0',
entry_point='predict.py',
source_dir='serve',
predictor_cls=StringPredictor)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
###Output
----------------------------------------------------------------------------------------!
###Markdown
Testing the modelNow that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and negative reviews and send them to the endpoint, then collect the results. The reason for only sending some of the data is that the amount of time it takes for our model to process the input and then perform inference is quite long and so testing the entire data set would be prohibitive.
###Code
import glob
def test_reviews(data_dir='../data/aclImdb', stop=250):
results = []
ground = []
# We make sure to test both positive and negative reviews
for sentiment in ['pos', 'neg']:
path = os.path.join(data_dir, 'test', sentiment, '*.txt')
files = glob.glob(path)
files_read = 0
print('Starting ', sentiment, ' files')
# Iterate through the files and send them to the predictor
for f in files:
with open(f) as review:
# First, we store the ground truth (was the review positive or negative)
if sentiment == 'pos':
ground.append(1)
else:
ground.append(0)
# Read in the review and convert to 'utf-8' for transmission via HTTP
review_input = review.read().encode('utf-8')
# Send the review to the predictor and store the results
"""Added float() to correct
ValueError: invalid literal for int() with base 10: b'1.0'
https://stackoverflow.com/questions/1841565/valueerror-invalid-literal-for-int-with-base-10"""
results.append(int(float(predictor.predict(review_input))))
# Sending reviews to our endpoint one at a time takes a while so we
# only send a small number of reviews
files_read += 1
if files_read == stop:
break
return ground, results
ground, results = test_reviews()
from sklearn.metrics import accuracy_score
accuracy_score(ground, results)
###Output
_____no_output_____
###Markdown
As an additional test, we can try sending the `test_review` that we looked at earlier.
###Code
predictor.predict(test_review)
###Output
_____no_output_____
###Markdown
Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back. Step 7 (again): Use the model for the web app> **TODO:** This entire section and the next contain tasks for you to complete, mostly using the AWS console.So far we have been accessing our model endpoint by constructing a predictor object which uses the endpoint and then just using the predictor object to perform inference. What if we wanted to create a web app which accessed our model? The way things are set up currently makes that not possible since in order to access a SageMaker endpoint the app would first have to authenticate with AWS using an IAM role which included access to SageMaker endpoints. However, there is an easier way! We just need to use some additional AWS services.The diagram above gives an overview of how the various services will work together. On the far right is the model which we trained above and which is deployed using SageMaker. On the far left is our web app that collects a user's movie review, sends it off and expects a positive or negative sentiment in return.In the middle is where some of the magic happens. We will construct a Lambda function, which you can think of as a straightforward Python function that can be executed whenever a specified event occurs. We will give this function permission to send and recieve data from a SageMaker endpoint.Lastly, the method we will use to execute the Lambda function is a new endpoint that we will create using API Gateway. This endpoint will be a url that listens for data to be sent to it. Once it gets some data it will pass that data on to the Lambda function and then return whatever the Lambda function returns. Essentially it will act as an interface that lets our web app communicate with the Lambda function. Setting up a Lambda functionThe first thing we are going to do is set up a Lambda function. This Lambda function will be executed whenever our public API has data sent to it. When it is executed it will receive the data, perform any sort of processing that is required, send the data (the review) to the SageMaker endpoint we've created and then return the result. Part A: Create an IAM Role for the Lambda functionSince we want the Lambda function to call a SageMaker endpoint, we need to make sure that it has permission to do so. To do this, we will construct a role that we can later give the Lambda function.Using the AWS Console, navigate to the **IAM** page and click on **Roles**. Then, click on **Create role**. Make sure that the **AWS service** is the type of trusted entity selected and choose **Lambda** as the service that will use this role, then click **Next: Permissions**.In the search box type `sagemaker` and select the check box next to the **AmazonSageMakerFullAccess** policy. Then, click on **Next: Review**.Lastly, give this role a name. Make sure you use a name that you will remember later on, for example `LambdaSageMakerRole`. Then, click on **Create role**. Part B: Create a Lambda functionNow it is time to actually create the Lambda function.Using the AWS Console, navigate to the AWS Lambda page and click on **Create a function**. When you get to the next page, make sure that **Author from scratch** is selected. 
Now, name your Lambda function, using a name that you will remember later on, for example `sentiment_analysis_func`. Make sure that the **Python 3.6** runtime is selected and then choose the role that you created in the previous part. Then, click on **Create Function**. On the next page you will see some information about the Lambda function you've just created. If you scroll down you should see an editor in which you can write the code that will be executed when your Lambda function is triggered. In our example, we will use the code below.
```python
# We need to use the low-level library to interact with SageMaker since the SageMaker API
# is not available natively through Lambda.
import boto3

def lambda_handler(event, context):

    # The SageMaker runtime is what allows us to invoke the endpoint that we've created.
    runtime = boto3.Session().client('sagemaker-runtime')

    # Now we use the SageMaker runtime to invoke our endpoint, sending the review we were given
    response = runtime.invoke_endpoint(EndpointName = '**ENDPOINT NAME HERE**',  # The name of the endpoint we created
                                       ContentType = 'text/plain',               # The data format that is expected
                                       Body = event['body'])                     # The actual review

    # The response is an HTTP response whose body contains the result of our inference
    result = response['Body'].read().decode('utf-8')

    return {
        'statusCode' : 200,
        'headers' : { 'Content-Type' : 'text/plain', 'Access-Control-Allow-Origin' : '*' },
        'body' : result
    }
```
Once you have copied and pasted the code above into the Lambda code editor, replace the `**ENDPOINT NAME HERE**` portion with the name of the endpoint that we deployed earlier. You can determine the name of the endpoint using the code cell below.
###Code
predictor.endpoint
###Output
_____no_output_____
###Markdown
Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function. Setting up API GatewayNow that our Lambda function is set up, it is time to create a new API using API Gateway that will trigger the Lambda function we have just created.Using AWS Console, navigate to **Amazon API Gateway** and then click on **Get started**.On the next page, make sure that **New API** is selected and give the new api a name, for example, `sentiment_analysis_api`. Then, click on **Create API**.Now we have created an API, however it doesn't currently do anything. What we want it to do is to trigger the Lambda function that we created earlier.Select the **Actions** dropdown menu and click **Create Method**. A new blank method will be created, select its dropdown menu and select **POST**, then click on the check mark beside it.For the integration point, make sure that **Lambda Function** is selected and click on the **Use Lambda Proxy integration**. This option makes sure that the data that is sent to the API is then sent directly to the Lambda function with no processing. It also means that the return value must be a proper response object as it will also not be processed by API Gateway.Type the name of the Lambda function you created earlier into the **Lambda Function** text entry box and then click on **Save**. Click on **OK** in the pop-up box that then appears, giving permission to API Gateway to invoke the Lambda function you created.The last step in creating the API Gateway is to select the **Actions** dropdown and click on **Deploy API**. You will need to create a new Deployment stage and name it anything you like, for example `prod`.You have now successfully set up a public API to access your SageMaker model. Make sure to copy or write down the URL provided to invoke your newly created public API as this will be needed in the next step. This URL can be found at the top of the page, highlighted in blue next to the text **Invoke URL**. https://pwm8lucztc.execute-api.us-west-2.amazonaws.com/prod Step 4: Deploying our web appNow that we have a publicly available API, we can start using it in a web app. For our purposes, we have provided a simple static html file which can make use of the public api you created earlier.In the `website` folder there should be a file called `index.html`. Download the file to your computer and open that file up in a text editor of your choice. There should be a line which contains **\*\*REPLACE WITH PUBLIC API URL\*\***. Replace this string with the url that you wrote down in the last step and then save the file.Now, if you open `index.html` on your local computer, your browser will behave as a local web server and you can use the provided site to interact with your SageMaker model.If you'd like to go further, you can host this html file anywhere you'd like, for example using github or hosting a static site on Amazon's S3. Once you have done this you can share the link with anyone you'd like and have them play with it too!> **Important Note** In order for the web app to communicate with the SageMaker endpoint, the endpoint has to actually be deployed and running. This means that you are paying for it. 
Make sure that the endpoint is running when you want to use the web app but that you shut it down when you don't need it, otherwise you will end up with a surprisingly large AWS bill.**TODO:** Make sure that you include the edited `index.html` file in your project submission. Now that your web app is working, try playing around with it and see how well it works.**Question**: Give an example of a review that you entered into your web app. What was the predicted sentiment of your example review? **Answer:** Positive example: "This was a really great project! There were some issues getting Amazon Web Services (AWS) to properly recognize the ml.p2.xlarge instances, but they were resolved. This project made me refresh my recurrent neural network knowledge and broadened my skills by using AWS." The model answered: Negative. Negative example: "I would not watch this movie again. I want an hour and a half of my life back. Another patron had the same sentiment and he shouted, "I want my money back!" when the credits rolled." The model answered: Negative. I have no doubt that with further training and refined hyperparameters the model can provide better sentiment predictions. Delete the endpoint
Remember to always shut down your endpoint if you are no longer using it. You are charged for the length of time that the endpoint is running so if you forget and leave it on you could end up with an unexpectedly large bill.
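Before shutting the endpoint down, the public API itself can also be exercised directly from Python. Here is a minimal sketch (not part of the original project; it assumes the `requests` library and uses a placeholder for the Invoke URL noted above):
```python
import requests

# Placeholder: paste your own Invoke URL from the API Gateway console here
api_url = "https://<your-api-id>.execute-api.us-west-2.amazonaws.com/prod"

review = "This movie was an absolute delight from start to finish."

# The Lambda function forwards the raw request body to the SageMaker endpoint,
# so the review is sent as plain text.
response = requests.post(api_url, data=review.encode("utf-8"),
                         headers={"Content-Type": "text/plain"})
print(response.text)  # the Lambda function returns the model's output, e.g. '1.0' for positive
```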
###Code
predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Creating a Sentiment Analysis Web App Using PyTorch and SageMaker_Deep Learning Nanodegree Program | Deployment_---Now that we have a basic understanding of how SageMaker works we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to enter a movie review. The web page will then send the review off to our deployed model which will predict the sentiment of the entered review. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. General OutlineRecall the general outline for SageMaker projects using a notebook instance.1. Download or otherwise retrieve the data.2. Process / Prepare the data.3. Upload the processed data to S3.4. Train a chosen model.5. Test the trained model (typically using a batch transform job).6. Deploy the trained model.7. Use the deployed model.For this project, you will be following the steps in the general outline with some modifications. First, you will not be testing the model in its own step. You will still be testing the model, however, you will do it by deploying your model and then using the deployed model by sending the test data to it. One of the reasons for doing this is so that you can make sure that your deployed model is working correctly before moving forward.In addition, you will deploy and use your trained model a second time. In the second iteration you will customize the way that your trained model is deployed by including some of your own code. In addition, your newly deployed model will be used in the sentiment analysis web app. Step 1: Downloading the dataAs in the XGBoost in SageMaker notebook, we will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/)> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
--2019-05-01 19:16:51-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’
../data/aclImdb_v1. 100%[===================>] 80.23M 42.3MB/s in 1.9s
2019-05-01 19:16:53 (42.3 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
###Markdown
Step 2: Preparing and Processing the dataAlso, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a training set and a testing set.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
###Output
IMDB reviews: train = 12500 pos / 12500 neg, test = 12500 pos / 12500 neg
###Markdown
Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records.
###Code
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return a unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
###Output
IMDb reviews (combined): train = 25000, test = 25000
###Markdown
Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loaded correctly.
###Code
print(train_X[100])
print(train_y[100])
###Output
Very funny to watch "Beretta's Island" as kind of natural trash-film.It is like answer to Jess Franko's type of b-movie.Bodybuilders strikes back (!face to face!) to pushers.The very very very stupid strike!Action: unbelievably bad directed firing(shooting) scenes look even better than hand-to-hand fighting.Chasing scenes ridiculous.Saving beauties scenes incredibly stupid.Erotic scenes are very unerotic.The main luck of film is pretty landscapes and festival scenes.Don't miss:Arnold Schwarzenegger's joke at start of film and list of Franco Columbu's kin at the end. Special attraction: naked bosom.Almoust forgot - Franco can sing!
0
###Markdown
The first step in processing the reviews is to make sure that any html tags that appear should be removed. In addition we wish to tokenize our input, that way words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis.
###Code
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
import re
from bs4 import BeautifulSoup
def review_to_words(review):
nltk.download("stopwords", quiet=True)
stemmer = PorterStemmer()
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
###Output
_____no_output_____
###Markdown
The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set.
###Code
# TODO: Apply review_to_words to a review (train_X[100] or any other review)
print(review_to_words(train_X[100]))
###Output
['funni', 'watch', 'beretta', 'island', 'kind', 'natur', 'trash', 'film', 'like', 'answer', 'jess', 'franko', 'type', 'b', 'movi', 'bodybuild', 'strike', 'back', 'face', 'face', 'pusher', 'stupid', 'strike', 'action', 'unbeliev', 'bad', 'direct', 'fire', 'shoot', 'scene', 'look', 'even', 'better', 'hand', 'hand', 'fight', 'chase', 'scene', 'ridicul', 'save', 'beauti', 'scene', 'incred', 'stupid', 'erot', 'scene', 'unerot', 'main', 'luck', 'film', 'pretti', 'landscap', 'festiv', 'scene', 'miss', 'arnold', 'schwarzenegg', 'joke', 'start', 'film', 'list', 'franco', 'columbu', 'kin', 'end', 'special', 'attract', 'nake', 'bosom', 'almoust', 'forgot', 'franco', 'sing']
###Markdown
**Question:** Above we mentioned that `review_to_words` method removes html formatting and allows us to tokenize the words found in a review, for example, converting *entertained* and *entertaining* into *entertain* so that they are treated as though they are the same word. What else, if anything, does this method do to the input? **Answer:** In addition to removing html formatting and tokenizing the input words, `review_to_words()` also:
- converts all of the input text to lowercase and replaces every character that is not a letter or a digit with a space in the `re.sub()` call,
- splits the resulting string on whitespace into a list of individual words,
- removes any words that appear in the English stopwords list using a list comprehension.
The method below applies the `review_to_words` method to each of the reviews in the training and testing datasets. In addition it caches the results. This is because performing this processing step can take a long time. This way if you are unable to complete the notebook in the current session, you can come back without needing to process the data a second time.
###Code
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
Read preprocessed data from cache file: preprocessed_data.pkl
###Markdown
Transform the dataIn the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will construct a feature representation which is very similar. To start, we will represent each word as an integer. Of course, some of the words that appear in the reviews occur very infrequently and so likely don't contain much information for the purposes of sentiment analysis. The way we will deal with this problem is that we will fix the size of our working vocabulary and we will only include the words that appear most frequently. We will then combine all of the infrequent words into a single category and, in our case, we will label it as `1`.Since we will be using a recurrent neural network, it will be convenient if the length of each review is the same. To do this, we will fix a size for our reviews and then pad short reviews with the category 'no word' (which we will label `0`) and truncate long reviews. (TODO) Create a word dictionaryTo begin with, we need to construct a way to map words that appear in the reviews to integers. Here we fix the size of our vocabulary (including the 'no word' and 'infrequent' categories) to be `5000` but you may wish to change this to see how it affects the model.> **TODO:** Complete the implementation for the `build_dict()` method below. Note that even though the vocab_size is set to `5000`, we only want to construct a mapping for the most frequently appearing `4998` words. This is because we want to reserve the special labels `0` for 'no word' and `1` for 'infrequent word'.
###Code
import numpy as np
from collections import Counter
def build_dict(data, vocab_size = 5000):
"""Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer."""
# TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a
# sentence is a list of words.
word_count = Counter() # A dict storing the words that appear in the reviews along with how often they occur
for review in data:
word_count.update(review)
# TODO: Sort the words found in `data` so that sorted_words[0] is the most frequently appearing word and
# sorted_words[-1] is the least frequently appearing word.
sorted_words = sorted(word_count, key=word_count.get, reverse=True)
word_dict = {} # This is what we are building, a dictionary that translates words into integers
for idx, word in enumerate(sorted_words[:vocab_size - 2]): # The -2 is so that we save room for the 'no word'
word_dict[word] = idx + 2 # 'infrequent' labels
return word_dict
word_dict = build_dict(train_X)
###Output
_____no_output_____
###Markdown
**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it make sense that these words appear frequently in the training set? **Answer:** The five most frequently appearing (tokenized) words in the training set (from highest occurrence to lowest) are:
1. movi, 51695
2. film, 48190
3. one, 27741
4. like, 22799
5. time, 16191
It makes sense that these words appear in the training set as the dataset consists of movie reviews. The word roots 'movi' and 'film' are directly related to movies, and the words one, like, and time are all words I would expect to see in a well-written critical review.
###Code
# TODO: Use this space to determine the five most frequently appearing words in the training set.
# Take the smallest values: after mapping words to integers above, the most common words have the lowest
# indices, i.e. the most common words are closer to 0 than the least common words. Running most_common(5)
# would give the 5 words least likely to occur, so we slice the tail of the descending list instead.
Counter(word_dict).most_common()[:-6:-1]
###Output
_____no_output_____
###Markdown
Save `word_dict`Later on when we construct an endpoint which processes a submitted review we will need to make use of the `word_dict` which we have created. As such, we will save it to a file now for future use.
###Code
data_dir = '../data/pytorch' # The folder we will use for storing data
if not os.path.exists(data_dir): # Make sure that the folder exists
os.makedirs(data_dir)
with open(os.path.join(data_dir, 'word_dict.pkl'), "wb") as f:
pickle.dump(word_dict, f)
###Output
_____no_output_____
###Markdown
Transform the reviewsNow that we have our word dictionary which allows us to transform the words appearing in the reviews into integers, it is time to make use of it and convert our reviews to their integer sequence representation, making sure to pad or truncate to a fixed length, which in our case is `500`.
###Code
def convert_and_pad(word_dict, sentence, pad=500):
NOWORD = 0 # We will use 0 to represent the 'no word' category
INFREQ = 1 # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict
working_sentence = [NOWORD] * pad
for word_index, word in enumerate(sentence[:pad]):
if word in word_dict:
working_sentence[word_index] = word_dict[word]
else:
working_sentence[word_index] = INFREQ
return working_sentence, min(len(sentence), pad)
def convert_and_pad_data(word_dict, data, pad=500):
result = []
lengths = []
for sentence in data:
converted, leng = convert_and_pad(word_dict, sentence, pad)
result.append(converted)
lengths.append(leng)
return np.array(result), np.array(lengths)
train_X, train_X_len = convert_and_pad_data(word_dict, train_X)
test_X, test_X_len = convert_and_pad_data(word_dict, test_X)
###Output
_____no_output_____
###Markdown
As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processeed. Does this look reasonable? What is the length of a review in the training set?
###Code
# Use this cell to examine one of the processed reviews to make sure everything is working as intended.
print("Original length of a processed review =", train_X_len[100])
print("Padded length of a processed review =", len(train_X[100]))
print("\nBehold! The processed review:")
print(train_X[100])
###Output
Original length of a processed review = 73
Padded length of a processed review = 500
Behold! The processed review:
[ 84 12 1 915 147 369 978 3 5 881 2173 1 399 425
2 1 1282 64 214 214 1 291 1282 104 965 24 98 645
485 18 19 14 58 228 228 257 680 18 474 320 126 18
460 291 2132 18 1 217 1767 3 106 2158 1131 18 240 2748
1 336 87 3 705 3646 1 1 22 221 750 1201 1 1
2418 3646 594 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
###Markdown
**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Why or why not might this be a problem? **Answer:** There should not be an issue with using the `preprocess_data` or `convert_and_pad_data` methods to process both the training and testing sets. Each set is a separate object in Python, and as long as we, as programmers, do not confuse the inputs, the training and testing sets will not interact or affect one another. One drawback to the method we are using to process and convert the data, though, is that we lose the original sets to compare against, as the testing and training objects are replaced in memory with their processed outputs. Step 3: Upload the data to S3
As in the XGBoost notebook, we will need to upload the training dataset to S3 in order for our training code to access it. For now we will save it locally and we will upload to S3 later on. Save the processed training dataset locally
It is important to note the format of the data that we are saving as we will need to know it when we write the training code. In our case, each row of the dataset has the form `label`, `length`, `review[500]` where `review[500]` is a sequence of `500` integers representing the words in the review.
###Code
import pandas as pd
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \
.to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Uploading the training dataNext, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model.
###Code
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/sentiment_rnn'
role = sagemaker.get_execution_role()
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)
###Output
_____no_output_____
###Markdown
**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also in the S3 training bucket) and that we will need to make sure it gets saved in the model directory. Step 4: Build and Train the PyTorch ModelIn the XGBoost notebook we discussed what a model is in the SageMaker framework. In particular, a model comprises three objects - Model Artifacts, - Training Code, and - Inference Code, each of which interact with one another. In the XGBoost example we used training and inference code that was provided by Amazon. Here we will still be using containers provided by Amazon with the added benefit of being able to include our own custom code.We will start by implementing our own neural network in PyTorch along with a training script. For the purposes of this project we have provided the necessary model object in the `model.py` file, inside of the `train` folder. You can see the provided implementation by running the cell below.
###Code
!pygmentize train/model.py
###Output
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """
    This is the simple RNN model we will be using to perform Sentiment Analysis.
    """

    def __init__(self, embedding_dim, hidden_dim, vocab_size):
        """
        Initialize the model by setting up the various layers.
        """
        super(LSTMClassifier, self).__init__()

        self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=0)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)
        self.dense = nn.Linear(in_features=hidden_dim, out_features=1)
        self.sig = nn.Sigmoid()

        self.word_dict = None

    def forward(self, x):
        """
        Perform a forward pass of our model on some input.
        """
        x = x.t()
        lengths = x[0,:]
        reviews = x[1:,:]
        embeds = self.embedding(reviews)
        lstm_out, _ = self.lstm(embeds)
        out = self.dense(lstm_out)
        out = out[lengths - 1, range(len(lengths))]
        return self.sig(out.squeeze())
###Markdown
The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training script so that if we wish to modify them we do not need to modify the script itself. We will see how to do this later on. To start we will write some of the training code in the notebook so that we can more easily diagnose any issues that arise.First we will load a small portion of the training data set to use as a sample. It would be very time consuming to try and train the model completely in the notebook as we do not have access to a gpu and the compute instance that we are using is not particularly powerful. However, we can work on a small bit of the data to get a feel for how our training script is behaving.
###Code
import torch
import torch.utils.data
# Read in only the first 250 rows
train_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)
# Turn the input pandas dataframe into tensors
train_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()
train_sample_X = torch.from_numpy(train_sample.drop([0], axis=1).values).long()
# Build the dataset
train_sample_ds = torch.utils.data.TensorDataset(train_sample_X, train_sample_y)
# Build the dataloader
train_sample_dl = torch.utils.data.DataLoader(train_sample_ds, batch_size=50)
###Output
_____no_output_____
###Markdown
(TODO) Writing the training methodNext we need to write the training code itself. This should be very similar to training methods that you have written before to train PyTorch models. We will leave any difficult aspects such as model saving / loading and parameter loading until a little later.
###Code
def train(model, train_loader, epochs, optimizer, loss_fn, device):
for epoch in range(1, epochs + 1):
model.train()
total_loss = 0
for batch in train_loader:
batch_X, batch_y = batch
batch_X = batch_X.to(device)
batch_y = batch_y.to(device)
# TODO: Complete this train method to train the model provided.
optimizer.zero_grad()
output = model.forward(batch_X)
loss = loss_fn(output, batch_y)
loss.backward()
optimizer.step()
total_loss += loss.data.item()
print("Epoch: {}, BCELoss: {}".format(epoch, total_loss / len(train_loader)))
###Output
_____no_output_____
###Markdown
Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early when they are easier to diagnose.
###Code
import torch.optim as optim
from train.model import LSTMClassifier
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LSTMClassifier(32, 100, 5000).to(device)
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.BCELoss()
train(model, train_sample_dl, 5, optimizer, loss_fn, device)
###Output
Epoch: 1, BCELoss: 0.6986251473426819
Epoch: 2, BCELoss: 0.6900709867477417
Epoch: 3, BCELoss: 0.6833637595176697
Epoch: 4, BCELoss: 0.6766082167625427
Epoch: 5, BCELoss: 0.6689576625823974
###Markdown
In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one) for a `requirements.txt` file and install any required Python libraries, after which the training script will be run. (TODO) Training the modelWhen a PyTorch model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained. Inside of the `train` directory is a file called `train.py` which has been provided and which contains most of the necessary code to train our model. The only thing that is missing is the implementation of the `train()` method which you wrote earlier in this notebook.**TODO**: Copy the `train()` method written above and paste it into the `train/train.py` file where required.The way that SageMaker passes hyperparameters to the training script is by way of arguments. These arguments can then be parsed and used in the training script. To see how this is done take a look at the provided `train/train.py` file.
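To make the mechanism concrete, here is a minimal, hypothetical sketch of that argument-parsing pattern; the actual argument names and defaults live in `train/train.py`, and `SM_MODEL_DIR` and `SM_CHANNEL_TRAINING` are the environment variables that SageMaker sets inside the training container (they also show up in the training log below).
###Code
# Hedged sketch of how hyperparameters reach a SageMaker training script -- not the provided train.py.
import argparse
import os
def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    # Hyperparameters arrive as command-line arguments, e.g. --epochs 10 --hidden_dim 200
    parser.add_argument('--epochs', type=int, default=10)
    parser.add_argument('--hidden_dim', type=int, default=100)
    # Input/output directories arrive through environment variables set by SageMaker
    parser.add_argument('--model-dir', type=str, default=os.environ.get('SM_MODEL_DIR', 'model'))
    parser.add_argument('--data-dir', type=str, default=os.environ.get('SM_CHANNEL_TRAINING', 'data'))
    return parser.parse_args(argv)
# Roughly what the container invocation `python -m train --epochs 10 --hidden_dim 200` produces
print(parse_args(['--epochs', '10', '--hidden_dim', '200']))
###Output
_____no_output_____
###Markdown
With the hyperparameters wired through as arguments, we can construct the PyTorch estimator and launch the training job.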
###Code
from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point="train.py",
source_dir="train",
role=role,
framework_version='0.4.0',
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
hyperparameters={
'epochs': 10,
'hidden_dim': 200,
})
estimator.fit({'training': input_data})
###Output
2019-05-01 21:36:38 Starting - Starting the training job...
2019-05-01 21:36:40 Starting - Launching requested ML instances......
2019-05-01 21:37:44 Starting - Preparing the instances for training......
2019-05-01 21:38:51 Downloading - Downloading input data...
2019-05-01 21:39:22 Training - Downloading the training image...
2019-05-01 21:39:54 Training - Training image download completed. Training in progress.
[31mbash: cannot set terminal process group (-1): Inappropriate ioctl for device[0m
[31mbash: no job control in this shell[0m
[31m2019-05-01 21:39:55,612 sagemaker-containers INFO Imported framework sagemaker_pytorch_container.training[0m
[31m2019-05-01 21:39:55,635 sagemaker_pytorch_container.training INFO Block until all host DNS lookups succeed.[0m
[31m2019-05-01 21:39:58,642 sagemaker_pytorch_container.training INFO Invoking user training script.[0m
[31m2019-05-01 21:39:58,931 sagemaker-containers INFO Module train does not provide a setup.py. [0m
[31mGenerating setup.py[0m
[31m2019-05-01 21:39:58,931 sagemaker-containers INFO Generating setup.cfg[0m
[31m2019-05-01 21:39:58,931 sagemaker-containers INFO Generating MANIFEST.in[0m
[31m2019-05-01 21:39:58,932 sagemaker-containers INFO Installing module with the following command:[0m
[31m/usr/bin/python -m pip install -U . -r requirements.txt[0m
[31mProcessing /opt/ml/code[0m
[31mCollecting pandas (from -r requirements.txt (line 1))[0m
[31m Downloading https://files.pythonhosted.org/packages/74/24/0cdbf8907e1e3bc5a8da03345c23cbed7044330bb8f73bb12e711a640a00/pandas-0.24.2-cp35-cp35m-manylinux1_x86_64.whl (10.0MB)[0m
[31mCollecting numpy (from -r requirements.txt (line 2))
Downloading https://files.pythonhosted.org/packages/f6/f3/cc6c6745347c1e997cc3e58390584a250b8e22b6dfc45414a7d69a3df016/numpy-1.16.3-cp35-cp35m-manylinux1_x86_64.whl (17.2MB)[0m
[31mCollecting nltk (from -r requirements.txt (line 3))
Downloading https://files.pythonhosted.org/packages/73/56/90178929712ce427ebad179f8dc46c8deef4e89d4c853092bee1efd57d05/nltk-3.4.1.zip (3.1MB)[0m
[31mCollecting beautifulsoup4 (from -r requirements.txt (line 4))
Downloading https://files.pythonhosted.org/packages/1d/5d/3260694a59df0ec52f8b4883f5d23b130bc237602a1411fa670eae12351e/beautifulsoup4-4.7.1-py3-none-any.whl (94kB)[0m
[31mCollecting html5lib (from -r requirements.txt (line 5))
Downloading https://files.pythonhosted.org/packages/a5/62/bbd2be0e7943ec8504b517e62bab011b4946e1258842bc159e5dfde15b96/html5lib-1.0.1-py2.py3-none-any.whl (117kB)[0m
[31mRequirement already satisfied, skipping upgrade: python-dateutil>=2.5.0 in /usr/local/lib/python3.5/dist-packages (from pandas->-r requirements.txt (line 1)) (2.7.5)[0m
[31mCollecting pytz>=2011k (from pandas->-r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/3d/73/fe30c2daaaa0713420d0382b16fbb761409f532c56bdcc514bf7b6262bb6/pytz-2019.1-py2.py3-none-any.whl (510kB)[0m
[31mRequirement already satisfied, skipping upgrade: six in /usr/local/lib/python3.5/dist-packages (from nltk->-r requirements.txt (line 3)) (1.11.0)[0m
[31mCollecting soupsieve>=1.2 (from beautifulsoup4->-r requirements.txt (line 4))
Downloading https://files.pythonhosted.org/packages/b9/a5/7ea40d0f8676bde6e464a6435a48bc5db09b1a8f4f06d41dd997b8f3c616/soupsieve-1.9.1-py2.py3-none-any.whl[0m
[31mCollecting webencodings (from html5lib->-r requirements.txt (line 5))
Downloading https://files.pythonhosted.org/packages/f4/24/2a3e3df732393fed8b3ebf2ec078f05546de641fe1b667ee316ec1dcf3b7/webencodings-0.5.1-py2.py3-none-any.whl[0m
[31mBuilding wheels for collected packages: nltk, train
Running setup.py bdist_wheel for nltk: started[0m
[31m Running setup.py bdist_wheel for nltk: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/97/8a/10/d646015f33c525688e91986c4544c68019b19a473cb33d3b55
Running setup.py bdist_wheel for train: started
Running setup.py bdist_wheel for train: finished with status 'done'
Stored in directory: /tmp/pip-ephem-wheel-cache-8xedxlq5/wheels/35/24/16/37574d11bf9bde50616c67372a334f94fa8356bc7164af8ca3[0m
[31mSuccessfully built nltk train[0m
[31mInstalling collected packages: pytz, numpy, pandas, nltk, soupsieve, beautifulsoup4, webencodings, html5lib, train
Found existing installation: numpy 1.15.4[0m
[31m Uninstalling numpy-1.15.4:
Successfully uninstalled numpy-1.15.4[0m
[31mSuccessfully installed beautifulsoup4-4.7.1 html5lib-1.0.1 nltk-3.4.1 numpy-1.16.3 pandas-0.24.2 pytz-2019.1 soupsieve-1.9.1 train-1.0.0 webencodings-0.5.1[0m
[31mYou are using pip version 18.1, however version 19.1 is available.[0m
[31mYou should consider upgrading via the 'pip install --upgrade pip' command.[0m
[31m2019-05-01 21:40:10,712 sagemaker-containers INFO Invoking user script
[0m
[31mTraining Env:
[0m
[31m{
"user_entry_point": "train.py",
"log_level": 20,
"output_data_dir": "/opt/ml/output/data",
"framework_module": "sagemaker_pytorch_container.training:main",
"input_dir": "/opt/ml/input",
"input_config_dir": "/opt/ml/input/config",
"resource_config": {
"network_interface_name": "ethwe",
"hosts": [
"algo-1"
],
"current_host": "algo-1"
},
"job_name": "sagemaker-pytorch-2019-05-01-21-36-37-346",
"output_dir": "/opt/ml/output",
"additional_framework_parameters": {},
"model_dir": "/opt/ml/model",
"input_data_config": {
"training": {
"RecordWrapperType": "None",
"S3DistributionType": "FullyReplicated",
"TrainingInputMode": "File"
}
},
"module_name": "train",
"output_intermediate_dir": "/opt/ml/output/intermediate",
"num_cpus": 4,
"network_interface_name": "ethwe",
"num_gpus": 1,
"hosts": [
"algo-1"
],
"channel_input_dirs": {
"training": "/opt/ml/input/data/training"
},
"current_host": "algo-1",
"module_dir": "s3://sagemaker-us-west-2-630072047239/sagemaker-pytorch-2019-05-01-21-36-37-346/source/sourcedir.tar.gz",
"hyperparameters": {
"epochs": 10,
"hidden_dim": 200
}[0m
[31m}
[0m
[31mEnvironment variables:
[0m
[31mSM_CHANNELS=["training"][0m
[31mSM_USER_ENTRY_POINT=train.py[0m
[31mSM_MODULE_DIR=s3://sagemaker-us-west-2-630072047239/sagemaker-pytorch-2019-05-01-21-36-37-346/source/sourcedir.tar.gz[0m
[31mSM_HP_HIDDEN_DIM=200[0m
[31mSM_HOSTS=["algo-1"][0m
[31mSM_HP_EPOCHS=10[0m
[31mSM_FRAMEWORK_MODULE=sagemaker_pytorch_container.training:main[0m
[31mSM_MODULE_NAME=train[0m
[31mSM_INPUT_DIR=/opt/ml/input[0m
[31mSM_INPUT_DATA_CONFIG={"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}}[0m
[31mSM_CHANNEL_TRAINING=/opt/ml/input/data/training[0m
[31mSM_CURRENT_HOST=algo-1[0m
[31mSM_NUM_CPUS=4[0m
[31mSM_HPS={"epochs":10,"hidden_dim":200}[0m
[31mSM_LOG_LEVEL=20[0m
[31mSM_INPUT_CONFIG_DIR=/opt/ml/input/config[0m
[31mSM_OUTPUT_DATA_DIR=/opt/ml/output/data[0m
[31mSM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"ethwe"}[0m
[31mSM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate[0m
[31mSM_FRAMEWORK_PARAMS={}[0m
[31mSM_USER_ARGS=["--epochs","10","--hidden_dim","200"][0m
[31mSM_NETWORK_INTERFACE_NAME=ethwe[0m
[31mSM_MODEL_DIR=/opt/ml/model[0m
[31mSM_OUTPUT_DIR=/opt/ml/output[0m
[31mSM_NUM_GPUS=1[0m
[31mSM_TRAINING_ENV={"additional_framework_parameters":{},"channel_input_dirs":{"training":"/opt/ml/input/data/training"},"current_host":"algo-1","framework_module":"sagemaker_pytorch_container.training:main","hosts":["algo-1"],"hyperparameters":{"epochs":10,"hidden_dim":200},"input_config_dir":"/opt/ml/input/config","input_data_config":{"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}},"input_dir":"/opt/ml/input","job_name":"sagemaker-pytorch-2019-05-01-21-36-37-346","log_level":20,"model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-west-2-630072047239/sagemaker-pytorch-2019-05-01-21-36-37-346/source/sourcedir.tar.gz","module_name":"train","network_interface_name":"ethwe","num_cpus":4,"num_gpus":1,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"ethwe"},"user_entry_point":"train.py"}[0m
[31mPYTHONPATH=/usr/local/bin:/usr/lib/python35.zip:/usr/lib/python3.5:/usr/lib/python3.5/plat-x86_64-linux-gnu:/usr/lib/python3.5/lib-dynload:/usr/local/lib/python3.5/dist-packages:/usr/lib/python3/dist-packages
[0m
[31mInvoking script with the following command:
[0m
[31m/usr/bin/python -m train --epochs 10 --hidden_dim 200
[0m
[31mUsing device cuda.[0m
[31mGet train data loader.[0m
###Markdown
Step 5: Testing the modelAs mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly. Step 6: Deploy the model for testingNow that we have trained our model, we would like to test it to see how it performs. Currently our model takes input of the form `review_length, review[500]` where `review[500]` is a sequence of `500` integers which describe the words present in the review, encoded using `word_dict`. Fortunately for us, SageMaker provides built-in inference code for models with simple inputs such as this.There is one thing that we need to provide, however, and that is a function which loads the saved model. This function must be called `model_fn()` and takes as its only parameter a path to the directory where the model artifacts are stored. This function must also be present in the Python file which we specified as the entry point. In our case the model loading function has been provided and so no changes need to be made.**NOTE**: When the built-in inference code is run it must import the `model_fn()` method from the `train.py` file. This is why the training code is wrapped in a main guard (i.e., `if __name__ == '__main__':`)Since we don't need to change anything in the code that was uploaded during training, we can simply deploy the current model as-is.**NOTE:** When deploying a model you are asking SageMaker to launch a compute instance that will wait for data to be sent to it. As a result, this compute instance will continue to run until *you* shut it down. This is important to know since the cost of a deployed endpoint depends on how long it has been running for.In other words **If you are no longer using a deployed endpoint, shut it down!****TODO:** Deploy the trained model.
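As a rough illustration of what such a loading function looks like, the sketch below is hypothetical: the artifact file names (`model_info.pth`, `model.pth`) and the keys stored in them are assumptions, and the authoritative implementation is the one already provided in `train/train.py`.
###Code
# Hedged sketch of a model_fn -- NOT the provided implementation in train/train.py.
import os
import torch
from train.model import LSTMClassifier
def model_fn(model_dir):
    """Load the trained model from the directory containing the model artifacts."""
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    # Assumption: training saved the constructor arguments separately from the weights.
    with open(os.path.join(model_dir, "model_info.pth"), "rb") as f:
        model_info = torch.load(f)
    model = LSTMClassifier(model_info["embedding_dim"],
                           model_info["hidden_dim"],
                           model_info["vocab_size"])
    with open(os.path.join(model_dir, "model.pth"), "rb") as f:
        model.load_state_dict(torch.load(f))
    return model.to(device).eval()
###Output
_____no_output_____
###Markdown
With a loading function like this available to the inference code, we can go ahead and deploy.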
###Code
# TODO: Deploy the trained model
predictor = estimator.deploy(initial_instance_count=1, instance_type='ml.p2.xlarge')
###Output
----------------------------------------------------------------------------------------------------!
###Markdown
Step 7 - Use the model for testingOnce deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is.
###Code
test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1)
# We split the data into chunks and send each chunk separately, accumulating the results.
def predict(data, rows=512):
split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
predictions = np.array([])
for array in split_array:
predictions = np.append(predictions, predictor.predict(array))
return predictions
predictions = predict(test_X.values)
predictions = [round(num) for num in predictions]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis? **Answer:** This model differs from the XGBoost model in kind: XGBoost is a gradient-boosted ensemble of decision trees that improves performance iteratively, adding new trees that correct the mistakes of the trees built before them, and it treats each review as an unordered collection of word counts. The Recurrent Neural Network (RNN) is a completely different type of model that better fits the needs of natural language processing. An RNN passes information learned from earlier parts of the sequence forward through its hidden state into the next step, which lets it make informed predictions that take word order and context into account rather than scoring each feature independently. I believe the RNN model is better suited to sentiment analysis because we are analyzing language, where meaning depends heavily on context and word order; the bag-of-words tree ensemble can still be used, but its results will probably be weaker. (TODO) More testingWe now have a trained model which has been deployed, to which we can send processed reviews, and which returns the predicted sentiment. However, ultimately we would like to be able to send our model an unprocessed review. That is, we would like to send the review itself as a string. For example, suppose we wish to send the following review to our model.
###Code
test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.'
###Output
_____no_output_____
###Markdown
The question we now need to answer is, how do we send this review to our model?Recall in the first section of this notebook we did a bunch of data processing to the IMDb dataset. In particular, we did two specific things to the provided reviews. - Removed any HTML tags and stemmed the input - Encoded the review as a sequence of integers using `word_dict` In order to process the review we will need to repeat these two steps.**TODO**: Using the `review_to_words` and `convert_and_pad` methods from section one, convert `test_review` into a numpy array `test_data` suitable to send to our model. Remember that our model expects input of the form `review_length, review[500]`.
###Code
# TODO: Convert test_review into a form usable by the model and save the results in test_data
test_review_X, test_review_len = convert_and_pad(word_dict, review_to_words(test_review))
# Prepend the review length so the input matches the expected `review_length, review[500]` format.
test_data = np.hstack(([test_review_len], test_review_X)).reshape(1, -1)
###Output
_____no_output_____
###Markdown
Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review.
###Code
predictor.predict(test_data)
###Output
_____no_output_____
###Markdown
Since the return value of our model is close to `1`, the model predicts that the review we submitted is positive. Delete the endpointOf course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delete it.
###Code
estimator.delete_endpoint()
###Output
_____no_output_____ |
thinkful/data_science/my_progress/unit_2_supervised_learning/Unit_2_-_Lesson_3_-_Drill_-_Confusion_Matrix.ipynb | ###Markdown
Drill - Confusion MatrixIt's worth calculating these with code so that you fully understand how these statistics work, so here is your task for the cell below. Manually generate (meaning don't use the SKLearn function) your own confusion matrix and print it along with the sensitivity and specificity.
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy
import sklearn
import seaborn as sns
%matplotlib inline
# Grab and process the raw data.
data_path = ("https://raw.githubusercontent.com/Thinkful-Ed/data-201-resources/"
"master/sms_spam_collection/SMSSpamCollection"
)
sms_raw = pd.read_csv(data_path, delimiter= '\t', header=None)
sms_raw.columns = ['spam', 'message']
# Enumerate our spammy keywords.
keywords = ['click', 'offer', 'winner', 'buy', 'free', 'cash', 'urgent']
for key in keywords:
sms_raw[str(key)] = sms_raw.message.str.contains(
' ' + str(key) + ' ',
case=False
)
sms_raw['allcaps'] = sms_raw.message.str.isupper()
sms_raw['spam'] = (sms_raw['spam'] == 'spam')
data = sms_raw[keywords + ['allcaps']]
target = sms_raw['spam']
sns.heatmap(sms_raw.corr())
from sklearn.naive_bayes import BernoulliNB
bnb = BernoulliNB()
y_pred = bnb.fit(data, target).predict(data)
print("Number of mislabeled points out of a total {} points : {}".format(
data.shape[0],
(target != y_pred).sum()
))
from sklearn.metrics import confusion_matrix
confusion_matrix(target, y_pred)
###Output
_____no_output_____
###Markdown
DRILL:It's worth calculating these with code so that you fully understand how these statistics work, so here is your task for the cell below. Manually generate (meaning don't use the SKLearn function) your own confusion matrix and print it along with the sensitivity and specificity.
###Code
# Build your confusion matrix and calculate sensitivity and specificity here.
# Tally the four cells by hand. Note that in this naming the ham class (spam == False)
# is treated as the "positive" class, so tp/fp/fn/tn are named from the ham point of view.
tp = 0
fp = 0
fn = 0
tn = 0
for i in range(len(y_pred)):
    if y_pred[i] == False and target[i] == False:
        tp += 1
    elif y_pred[i] == True and target[i] == False:
        fp += 1
    elif y_pred[i] == False and target[i] == True:
        fn += 1
    elif y_pred[i] == True and target[i] == True:
        tn += 1
# Rows are actual classes (ham, spam) and columns predicted classes,
# matching the layout of sklearn's confusion_matrix above.
confusion_mat = np.array([tp, fp, fn, tn]).reshape(2,2)
confusion_mat
# sensitivity = pct of spams correctly identified (spam caught / all actual spam)
spam_sensitivity = tn / (fn + tn)
spam_sensitivity
# specificity = pct of hams correctly identified (ham kept / all actual ham)
ham_specificity = tp / (tp + fp)
ham_specificity
###Output
_____no_output_____ |
Project 2/Project_2_SVD_2_Classes.ipynb | ###Markdown
Creating a TFxIDF vector representation of training and test data A TfidfVectorizer object is created to build a vocabulary from the training documents and to weight each term by its TF-IDF score. It is fit on the training dataset and can then be used to transform the documents in both the training and test datasets into TF-IDF feature vectors. (The equivalent two-step approach, a CountVectorizer followed by a TfidfTransformer, is left commented out in the cell below.)
###Code
#vectorizer = CountVectorizer(min_df = 3)
#tfidf_transformer = TfidfTransformer()
#vec_train_x = vectorizer.fit_transform(preproc_train_data)
#tfidf_train_x = tfidf_transformer.fit_transform(vec_train_x)
vectorizer = TfidfVectorizer(min_df=3, stop_words='english')
tfidf_train_x = vectorizer.fit_transform(preproc_train_data)
print('Dimensions of the TFIDF matrix are: ' + str(tfidf_train_x.shape))
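# (Hedged aside) To score held-out documents, the already-fitted vectorizer would simply be reused,
# assuming a `preproc_test_data` list prepared in the same way as `preproc_train_data`:
# tfidf_test_x = vectorizer.transform(preproc_test_data)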
km = KMeans(n_clusters=2, init='k-means++', max_iter=100, n_init=10, random_state=35)
km.fit(tfidf_train_x)
contingency_matrix(train_y, km.labels_)
print("Homogeneity: %0.3f" % homogeneity_score(train_y, km.labels_))
print("Completeness: %0.3f" % completeness_score(train_y, km.labels_))
print("V-measure: %0.3f" % v_measure_score(train_y, km.labels_))
print("Adjusted Rand-Index: %.3f"
% adjusted_rand_score(train_y, km.labels_))
print("Adjusted Mutual Info: %.3f"
% adjusted_mutual_info_score(train_y, km.labels_))
svd_model = TruncatedSVD(n_components=1000, random_state=0)
train_x2 = svd_model.fit_transform(tfidf_train_x)
r_values = [1, 2, 3, 5, 10, 20, 50, 100, 300]
homogeneity = []
completeness = []
v_mes = []
rand_ind = []
mutual_info = []
for r in r_values:
km.fit(train_x2[:,:r])
homogeneity.append(homogeneity_score(train_y, km.labels_))
completeness.append(completeness_score(train_y, km.labels_))
v_mes.append(v_measure_score(train_y, km.labels_))
rand_ind.append(adjusted_rand_score(train_y, km.labels_))
mutual_info.append(adjusted_mutual_info_score(train_y, km.labels_))
fig = plt.figure()
plt.plot(r_values, homogeneity, label='Homogeneity Values')
plt.plot(r_values, completeness, label='Completeness Values')
plt.plot(r_values, v_mes, label='V Measure Values')
plt.plot(r_values, rand_ind, label='Adjusted Rand Values')
plt.plot(r_values, mutual_info, label='Adjusted Mutual Info Values')
fig.suptitle('Measure variations vs r')
pylab.xlabel('r value')
pylab.ylabel('Score values')
plt.legend(['Homogeneity', 'Completeness', 'Measure', 'Adjusted Rand', 'Adjusted Mutual Info'], loc='upper right')
plt.show()
best_train = train_x2[:,:2]
km.fit(best_train)
print("Homogeneity: %0.3f" % homogeneity_score(train_y, km.labels_))
print("Completeness: %0.3f" % completeness_score(train_y, km.labels_))
print("V-measure: %0.3f" % v_measure_score(train_y, km.labels_))
print("Adjusted Rand-Index: %.3f"
% adjusted_rand_score(train_y, km.labels_))
print("Adjusted Mutual Info: %.3f"
% adjusted_mutual_info_score(train_y, km.labels_))
contingency_matrix(train_y, km.labels_)
y_kmeans = km.predict(best_train)
fig = plt.figure()
plt.scatter(train_x2[:,:2][:,0], train_x2[:,:2][:,1], c=y_kmeans)
plt.show()
###Output
_____no_output_____ |
maximizing-total-offspring.ipynb | ###Markdown
Maximizing equilibrium total offspring
###Code
x1, x2, x3, x4 = sym.symbols('x1, x2, x3, x4', real=True, nonnegative=True)
T, R, P, S = sym.symbols('T, R, P, S', real=True, positive=True)
M, m = sym.symbols("M, m", real=True, nonnegative=True)
epsilon = sym.symbols("epsilon", real=True, nonnegative=True)
UGA = symbolics.UGA
UgA = symbolics.UgA
x = np.array([[x1], [x2], [x3], [1 - x1 - x2 - x3]])
payoff_kernel = np.array([[R, S], [T, P]])
W = models.generalized_sexual_selection(x, UGA, UgA, payoff_kernel, M, m, epsilon)
N, = models.total_offspring(W, x)
sym.factor(sym.cancel(sym.together(sym.expand(N))), UGA(x1 + x3), UgA(x1 + x3), x1, x2)
###Output
_____no_output_____
###Markdown
Total offspring in the two-locus model is a convex combination of total offspring in the one-locus model where all females carry G-allele of the $\gamma$ gene and the one-locus model where all females carry the g-allele of the $\gamma$ gene.\begin{align}N(x_G, x_A; U_G, G_g) =& x_G \Bigg(2\bigg(\big((R + P) - (T + S)\big)U_G(x_A)^2 + \big((T + S) - 2P\big)U_G(x_A) + (P - M)\bigg)\Bigg) + (1 - x_G)\Bigg(2\bigg(\big((R + P) - (T + S)\big)U_g(x_A)^2 + \big((T + S) - 2P\big)U_g(x_A) + (P - m)\bigg)\Bigg) \\=& x_G N(x_A; U_G) + (1 - x_G)N(x_A, U_g)\end{align}Note that the function for total offspring is linear in $x_G$. This fact implies that equilibrium total offspring will be maximized at either $x_G*=0$ or $x_G^*=1$, depending on parameters. Thus any stable, fully polymorphic equilibrium will *not* maximize total offspring in equilibrium. Substitute equilibrium values
###Code
UGA_star, UgA_star, xG_star = sym.symbols("UGA_star, UgA_star, xG_star")
equilibrium_total_offspring = N.subs({UGA(x1+x3): UGA_star, UgA(x1+x3): UgA_star, x1: xG_star - x2}).simplify()
sym.factor(sym.cancel(equilibrium_total_offspring.subs({xG_star: 0, m: 0})), UgA_star)
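# Sanity check (an added step, not part of the original derivation): at the equilibrium values
# total offspring should be affine in xG_star, i.e. its second derivative in xG_star vanishes,
# which is why the maximum over [0, 1] must sit at xG_star = 0 or xG_star = 1.
sym.simplify(equilibrium_total_offspring.diff(xG_star, 2))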
###Output
_____no_output_____
###Markdown
$$ N\bigg(\frac{1}{2\epsilon + 1}, 1, Ug\bigg) = \frac{2(R - M) + 4\epsilon\bigg(Ug^2\big((R + P) - (T + S)\big) + Ug\big((T + S) - 2P\big) + P\bigg)}{2\epsilon + 1} $$ Make an interactive plot
###Code
_equilibrium_total_offspring = sym.lambdify((xG_star, UGA_star, UgA_star, T, R, P, S, M, m),
equilibrium_total_offspring,
modules="numpy")
def plot_total_offspring(xG_star, T, R, P, S, M):
fig, ax = plt.subplots(1, 1, figsize=(20, 10))
equilibrium_selection_probs = np.linspace(0, 1, 100).reshape(-1, 1)
UGAs = equilibrium_selection_probs.reshape(-1, 1)
UgAs = equilibrium_selection_probs.reshape(1, -1)
Z = _equilibrium_total_offspring(xG_star, UGAs, UgAs, T, R, P, S, M, 0)
cax = ax.imshow(Z, origin="lower")
contours = ax.contour(Z, colors='w', origin='lower')
ax.clabel(contours, contours.levels, inline=True, fontsize=10)
ax.set_ylabel(r"$U_{GA}^*$", fontsize=20, rotation="horizontal")
ax.set_xlabel(r"$U_{gA}^*$", fontsize=20)
ax.set_title(r"Equilibrium max total offspring for $x_G^*$={} is {}".format(xG_star, Z.max()), fontsize=25)
ax.grid(False)
# adjust the tick labels
locs, _ = plt.xticks()
plt.xticks(locs[1:], np.linspace(0, 1, locs.size-1))
locs, _ = plt.yticks()
plt.yticks(locs[1:], np.linspace(0, 1, locs.size-1))
plt.show()
xG_slider = widgets.FloatSlider(value=0.5, min=0, max=1, step=0.01, description=r"$x_G^*$")
# sliders used to control the Prisoner's Dilemma Payoffs
T_slider = widgets.FloatSlider(value=10, min=0, max=100, step=0.1, description=r"$T$")
R_slider = widgets.FloatSlider(value=3, min=0, max=100, step=0.1, description=r"$R$")
P_slider = widgets.FloatSlider(value=2, min=0, max=100, step=0.1, description=r"$P$")
S_slider = widgets.FloatSlider(value=1, min=0, max=100, step=0.1, description=r"$S$")
M_slider = widgets.FloatSlider(value=0, min=0, max=100, step=0.1, description=r"$M$")
w = widgets.interactive(plot_total_offspring, xG_star=xG_slider, T=T_slider, R=R_slider, P=P_slider, S=S_slider, M=M_slider)
display(w)
###Output
_____no_output_____
###Markdown
Find the optimal values of $x_G^*, U_{GA}^*, U_{gA}^*$.The number of total offspring can be written as a function of the equilibrium selection probability.$$ N\big(x_G^*, U_{GA}^*, U_{gA}^*\big) = 2\bigg(\big((R + P) - (T + S)\big)x_G^*U_{GA}^{*2} + \big((T + S) - 2P\big)x_G^*U_{GA}^* + P - \big((R + P) - (T + S)\big)x_G^*U_{gA}^{*2} - \big((T + S) - 2P\big)x_G^*U_{gA}^* + \big((R + P) - (T + S)\big)U_{gA}^{*2} + \big((T + S) - 2P\big)U_{gA}^* \bigg)$$To find the equilibrium selection probability that maximizes the number of total offspring we need to solve the following constrained optimization problem.$$ \max_{x_G^*, U_{GA}^*, U_{gA}^*}\ N\big(x_G^*, U_{GA}^*, U_{gA}^*\big) $$subject to the following inequality constraints.\begin{align} -x_G^* \le& 0 \\ x_G^* - 1 \le& 0 \\ -U_{GA}^* \le& 0 \\ U_{GA}^* - 1 \le& 0 \\ -U_{gA}^* \le& 0 \\ U_{gA}^* - 1 \le& 0\end{align}First-order conditions are as follows.\begin{align} 2\bigg(\big((R + P) - (T + S)\big)U_{GA}^{*2} + \big((T + S) - 2P\big)U_{GA}^* + P - \big((R + P) - (T + S)\big)U_{gA}^{*2} - \big((T + S) - 2P\big)U_{gA}^*\bigg) =& -\mu_{x_G^*, 0} + \mu_{x_G^*,1} \\ 2\bigg(2\big((R + P) - (T + S)\big)x_G^*U_{GA}^* + \big((T + S) - 2P\big)x_G^*\bigg) =& -\mu_{U_{GA}^*, 0} + \mu_{U_{GA}^*,1} \\ 2\bigg(-2\big((R + P) - (T + S)\big)x_G^*U_{gA}^* - \big((T + S) - 2P\big)x_G^* + 2\big((R + P) - (T + S)\big)U_{gA}^* + \big((T + S) - 2P\big) \bigg) =& -\mu_{U_{gA}^*, 0} + \mu_{U_{gA}^*,1}\end{align}Complementary slackness conditions are\begin{align} -\mu_{x_G^*,0}x_G^* =& 0 \\ \mu_{x_G^*,1}\big(x_G^* - 1\big) =& 0 \\ -\mu_{U_{GA}^*,0}U_{GA}^* =& 0 \\ \mu_{U_{GA}^*,1}\big(U_{GA}^* - 1\big) =& 0 -\mu_{U_{gA}^*,0}U_{gA}^* =& 0 \\ \mu_{U_{gA}^*,1}\big(U_{gA}^* - 1\big) =& 0\end{align}where $\mu_0, \mu_1$ are Lagrange multipliers. Case 1: interior equilibrium $(0 < x_G^* < 1, 0 < U_{GA}^* < 1, 0 < U_{gA}^* < 1)$In an interior equilibrium, complementary slackness conditions imply that all Lagrange multipliers are zero (i.e., $\mu_{x_G^*, 0} =\mu_{x_G^*, 1} =\mu_{U_{GA}^*, 0} =\mu_{U_{GA}^*, 0} =\mu_{U_{gA}^*, 0} =\mu_{U_{gA}^*, 0}=0$).Our first order conditions reduce to the following.\begin{align} 2\bigg(\big((R + P) - (T + S)\big)U_{GA}^{*2} + \big((T + S) - 2P\big)U_{GA}^* + P - \big((R + P) - (T + S)\big)U_{gA}^{*2} - \big((T + S) - 2P\big)U_{gA}^*\bigg) =& 0 \\ 2\bigg(2\big((R + P) - (T + S)\big)x_G^*U_{GA}^* + \big((T + S) - 2P\big)x_G^*\bigg) =& 0 \\ 2\bigg(-2\big((R + P) - (T + S)\big)x_G^*U_{gA}^* - \big((T + S) - 2P\big)x_G^* + 2\big((R + P) - (T + S)\big)U_{gA}^* + \big((T + S) - 2P\big) \bigg) =& 0\end{align}Rearranging the second first-order condition yields an expression for the optimal value of $U_{GA}^*$.$$ \bar{U}_{GA}^* = \frac{1}{2}\left(\frac{2P - (T + S)}{(R + P) - (T + S)}\right) $$Substituting this result into the first first-order condition and rearranging yields an identical exprssion for the optimal value of $U_{gA}^*$.$$ \bar{U}_{gA}^* = \frac{1}{2}\left(\frac{2P - (T + S)}{(R + P) - (T + S)}\right) $$Substituting this result into the third first-order condition yields a result which implies that the optimal value for $x_G^*$ is indeterminate (i.e., the objective is flat when holding $U_{GA}^*$ and $U_{gA}^*$ fixed).
###Code
first_order_conditions = sym.Matrix([equilibrium_total_offspring.diff(xG_star, 1),
equilibrium_total_offspring.diff(UGA_star, 1),
equilibrium_total_offspring.diff(UgA_star, 1)])
optimal_UGA_star, = sym.solve(first_order_conditions[1,0], UGA_star)
optimal_UgA_star, = sym.solve(first_order_conditions[0,0].subs({UGA_star: optimal_UGA_star}), UgA_star)
# optimal value for xG_star is indeterminate!
sym.simplify(first_order_conditions[2,0].subs({UGA_star: optimal_UGA_star, UgA_star: optimal_UgA_star}))
jacobian = first_order_conditions.jacobian([xG_star, UGA_star, UgA_star])
simplified_jacobian = sym.simplify(jacobian.subs({UGA_star: optimal_UGA_star, UgA_star: optimal_UgA_star}))
e1, e2, e3 = (simplified_jacobian.eigenvals()
.keys())
e1
e2
e3
###Output
_____no_output_____
###Markdown
Requirement for total offspring to optimal a local maximum at the above values derived above is for the Hessian to be negative semi-definite. This requirement will be satisfied if and only if $$ R + P < T + S. $$ Case 2: equilibrium with $\bar{x}_G^*=1$, $(0 < U_{GA}^* < 1, 0 < U_{gA}^* < 1)$In this equilibrium, complementary slackness conditions imply that all Lagrange multipliers are zero (i.e., $\mu_{x_G^*, 0} =\mu_{U_{GA}^*, 0} =\mu_{U_{GA}^*, 0} =\mu_{U_{gA}^*, 0} =\mu_{U_{gA}^*, 0}=0$) except $\mu_{x_G^*, 1} > 0$.Our first order conditions reduce to the following.\begin{align} 2\bigg(\big((R + P) - (T + S)\big)U_{GA}^{*2} + \big((T + S) - 2P\big)U_{GA}^* + P - \big((R + P) - (T + S)\big)U_{gA}^{*2} - \big((T + S) - 2P\big)U_{gA}^*\bigg) =& \mu_{x_G^*, 1} \\ 2\bigg(2\big((R + P) - (T + S)\big)U_{GA}^* + \big((T + S) - 2P\big)\bigg) =& 0 \\ 2\bigg(-2\big((R + P) - (T + S)\big)U_{gA}^* - \big((T + S) - 2P\big) + 2\big((R + P) - (T + S)\big)U_{gA}^* + \big((T + S) - 2P\big) \bigg) =& 0\end{align}Rearranging the second first-order condition yields an expression for the optimal value of $U_{GA}^*$.$$ \bar{U}_{GA}^* = \frac{1}{2}\left(\frac{2P - (T + S)}{(R + P) - (T + S)}\right) $$Substituting this optimal value of $U_{GA}^*$ into the first first-order condition and rearranging we find that the inequality will hold so long as $$ \big((R + P) - (T + S)\big)\big(U_{gA}^* - \bar{U}_{GA}^*\big)^2 + P > 0 $$which requires $R + P > T + S$. Finally, rearranging the third first-order condition implies that the optimal value for $U_{gA}^*$ is indeterminate: so long as $R + P > T + S$, then $\bar{x}_G^*=1$ for any value of $U_{gA}^*$.
###Code
def interior_optimal_UGA(T, R, P, S):
return 0.5 * ((2 * P - (T + S)) / ((R + P) - (T + S)))
def interior_optimal_UgA(T, R, P, S):
return interior_optimal_UGA(T, R, P, S)
def _mu_xG_1(UGA, UgA, T, R, P, S):
multiplier = 2 * (((R + P) - (T + S)) * UGA**2 + ((T + S) - 2 * P) * UGA + P -
((R + P) - (T + S)) * UgA**2 - ((T + S) - 2 * P) * UgA)
return multiplier
def max_total_fitness(UGA, UgA, T, R, P, S, M=0, m=0):
    multiplier = _mu_xG_1(UGA, UgA, T, R, P, S)
    if multiplier > 0:
        # total offspring is increasing in xG*, so the maximum is at xG* = 1
        return _equilibrium_total_offspring(1.0, UGA, UgA, T, R, P, S, M, m)
    elif multiplier < 0:
        # total offspring is decreasing in xG*, so the maximum is at xG* = 0
        return _equilibrium_total_offspring(0.0, UGA, UgA, T, R, P, S, M, m)
    else:
        # xG* is indeterminate: search numerically over (xG*, UGA*, UgA*)
        objective = lambda x: -_equilibrium_total_offspring(x[0], x[1], x[2], T, R, P, S, M, m)
        x0 = 0.5 * np.ones(3)
        res = optimize.minimize(objective, x0, bounds=[(0, 1), (0, 1), (0, 1)])
        return -res.fun
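# Illustrative call (hedged; commented out so the cell's behaviour is unchanged). For
# Prisoner's Dilemma payoffs such as T=25, R=3, P=2, S=1, evaluate the candidate maximum
# at the interior-optimal selection probabilities:
# max_total_fitness(interior_optimal_UGA(25, 3, 2, 1), interior_optimal_UgA(25, 3, 2, 1),
#                   25, 3, 2, 1)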
###Output
_____no_output_____ |
docs/_static/notebooks/scaling.ipynb | ###Markdown
Scaling Gaussian Processes to big datasetsThis notebook was made with the following version of george:
###Code
import george
george.__version__
###Output
_____no_output_____
###Markdown
One of the biggest technical challenges faced when using Gaussian Processes to model big datasets is that the computational cost naïvely scales as $\mathcal{O}(N^3)$ where $N$ is the number of points in your dataset. This cost can be prohibitive even for moderately sized datasets. There are a lot of methods for making these types of problems tractable by exploiting structure or making approximations. George comes equipped with one approximate method with controllable precision that works well with one-dimensional inputs (time series, for example). The method comes from [this paper](http://arxiv.org/abs/1403.6015) and it can help speed up many—but not all—Gaussian Process models.To demonstrate this method, in this tutorial, we'll benchmark the two Gaussian Process "solvers" included with george. For comparison, we'll also measure the computational cost of the same operations using the popular [GPy library](https://github.com/SheffieldML/GPy) and the [new scikit-learn interface](https://github.com/scikit-learn/scikit-learn/pull/4270). Note that GPy is designed as a Gaussian Process toolkit: it comes with a huge number of state-of-the-art algorithms for the application of Gaussian Processes, but it is not meant for efficiently computing marginalized likelihoods, so the comparison isn't totally fair.As usual, we'll start by generating a large fake dataset:
###Code
import numpy as np
import matplotlib.pyplot as pl
np.random.seed(1234)
x = np.sort(np.random.uniform(0, 10, 50000))
yerr = 0.1 * np.ones_like(x)
y = np.sin(x)
###Output
_____no_output_____
###Markdown
The standard method for computing the marginalized likelihood of this dataset under a GP model is:
###Code
from george import kernels
kernel = np.var(y) * kernels.ExpSquaredKernel(1.0)
gp_basic = george.GP(kernel)
gp_basic.compute(x[:100], yerr[:100])
print(gp_basic.log_likelihood(y[:100]))
###Output
133.946394912
###Markdown
When using only 100 data points, this computation is very fast but we could also use the approximate solver as follows:
###Code
gp_hodlr = george.GP(kernel, solver=george.HODLRSolver, seed=42)
gp_hodlr.compute(x[:100], yerr[:100])
print(gp_hodlr.log_likelihood(y[:100]))
###Output
133.946394912
###Markdown
The new scikit-learn interface is quite similar (you'll need to install a recent version of scikit-learn to execute this cell):
###Code
import sklearn
print("sklearn version: {0}".format(sklearn.__version__))
from sklearn.gaussian_process.kernels import RBF
from sklearn.gaussian_process import GaussianProcessRegressor
kernel_skl = np.var(y) * RBF(length_scale=1.0)
gp_skl = GaussianProcessRegressor(kernel_skl,
alpha=yerr[:100]**2,
optimizer=None,
copy_X_train=False)
gp_skl.fit(x[:100, None], y[:100])
print(gp_skl.log_marginal_likelihood(kernel_skl.theta))
###Output
sklearn version: 0.19.1
133.946394918
###Markdown
To implement this same model in GPy, you would do something like (I've never been able to get the heteroscedastic regression to work in GPy):
###Code
import GPy
print("GPy version: {0}".format(GPy.__version__))
kernel_gpy = GPy.kern.RBF(input_dim=1, variance=np.var(y), lengthscale=1.)
gp_gpy = GPy.models.GPRegression(x[:100, None], y[:100, None], kernel_gpy)
gp_gpy['.*Gaussian_noise'] = yerr[0]**2
print(gp_gpy.log_likelihood())
###Output
GPy version: 1.8.4
133.946345613
###Markdown
Now that we have working implementations of this model using all of the different methods and modules, let's run a benchmark to look at the computational cost and scaling of each option. The code here doesn't matter too much but we'll compute the best-of-"K" runtime for each method where "K" depends on how long I'm willing to wait. This cell takes a few minutes to run.
###Code
import time
ns = np.array([50, 100, 200, 500, 1000, 5000, 10000, 50000], dtype=int)
t_basic = np.nan + np.zeros(len(ns))
t_hodlr = np.nan + np.zeros(len(ns))
t_gpy = np.nan + np.zeros(len(ns))
t_skl = np.nan + np.zeros(len(ns))
for i, n in enumerate(ns):
# Time the HODLR solver.
best = np.inf
for _ in range(100000 // n):
strt = time.time()
gp_hodlr.compute(x[:n], yerr[:n])
gp_hodlr.log_likelihood(y[:n])
dt = time.time() - strt
if dt < best:
best = dt
t_hodlr[i] = best
# Time the basic solver.
best = np.inf
for _ in range(10000 // n):
strt = time.time()
gp_basic.compute(x[:n], yerr[:n])
gp_basic.log_likelihood(y[:n])
dt = time.time() - strt
if dt < best:
best = dt
t_basic[i] = best
# Compare to the proposed scikit-learn interface.
best = np.inf
if n <= 10000:
gp_skl = GaussianProcessRegressor(kernel_skl,
alpha=yerr[:n]**2,
optimizer=None,
copy_X_train=False)
gp_skl.fit(x[:n, None], y[:n])
for _ in range(10000 // n):
strt = time.time()
gp_skl.log_marginal_likelihood(kernel_skl.theta)
dt = time.time() - strt
if dt < best:
best = dt
t_skl[i] = best
# Compare to GPy.
best = np.inf
for _ in range(5000 // n):
kernel_gpy = GPy.kern.RBF(input_dim=1, variance=np.var(y), lengthscale=1.)
strt = time.time()
gp_gpy = GPy.models.GPRegression(x[:n, None], y[:n, None], kernel_gpy)
gp_gpy['.*Gaussian_noise'] = yerr[0]**2
gp_gpy.log_likelihood()
dt = time.time() - strt
if dt < best:
best = dt
t_gpy[i] = best
###Output
_____no_output_____
###Markdown
Finally, here are the results of the benchmark plotted on a logarithmic scale:
###Code
pl.loglog(ns, t_gpy, "-o", label="GPy")
pl.loglog(ns, t_skl, "-o", label="sklearn")
pl.loglog(ns, t_basic, "-o", label="basic")
pl.loglog(ns, t_hodlr, "-o", label="HODLR")
pl.xlim(30, 80000)
pl.ylim(1.1e-4, 50.)
pl.xlabel("number of datapoints")
pl.ylabel("time [seconds]")
pl.legend(loc=2, fontsize=16);
###Output
_____no_output_____ |
docs/examples/frameworks/mxnet/mxnet-external_input.ipynb | ###Markdown
ExternalSource operatorIn this example, we will see how to use the `ExternalSource` operator with the MXNet DALI iterator, which allows us to use an external data source as an input to the Pipeline.In order to achieve that, we have to define an Iterator or Generator class whose `next` function will return one or several `numpy` arrays.
###Code
from __future__ import division
import types
import collections
import numpy as np
from random import shuffle
from nvidia.dali.pipeline import Pipeline
import nvidia.dali.ops as ops
import nvidia.dali.types as types
batch_size = 3
epochs = 3
###Output
_____no_output_____
###Markdown
Defining the iterator
###Code
class ExternalInputIterator(object):
def __init__(self, batch_size, device_id, num_gpus):
self.images_dir = "../../data/images/"
self.batch_size = batch_size
with open(self.images_dir + "file_list.txt", 'r') as f:
            self.files = [line.rstrip() for line in f if line != '']
# whole data set size
self.data_set_len = len(self.files)
# based on the device_id and total number of GPUs - world size
# get proper shard
self.files = self.files[self.data_set_len * device_id // num_gpus:
self.data_set_len * (device_id + 1) // num_gpus]
self.n = len(self.files)
def __iter__(self):
self.i = 0
shuffle(self.files)
return self
def __next__(self):
batch = []
labels = []
if self.i >= self.n:
raise StopIteration
for _ in range(self.batch_size):
jpeg_filename, label = self.files[self.i].split(' ')
f = open(self.images_dir + jpeg_filename, 'rb')
batch.append(np.frombuffer(f.read(), dtype = np.uint8))
labels.append(np.array([label], dtype = np.uint8))
self.i = (self.i + 1) % self.n
return (batch, labels)
@property
def size(self,):
return self.data_set_len
next = __next__
###Output
_____no_output_____
###Markdown
Defining the pipelineNow the pipeline itself will be defined. First of all, a framework iterator will be used, so we need to make sure that the images output by the pipeline are uniform in size; that is why the resize operator is used. Also, `iter_setup` will raise the StopIteration exception when the ExternalInputIterator runs out of data. Worth noticing is that the iterator needs to be recreated so that the next time `iter_setup` is called it has data ready to consume.
###Code
class ExternalSourcePipeline(Pipeline):
def __init__(self, batch_size, num_threads, device_id, external_data):
super(ExternalSourcePipeline, self).__init__(batch_size,
num_threads,
device_id,
seed=12)
self.input = ops.ExternalSource()
self.input_label = ops.ExternalSource()
self.decode = ops.ImageDecoder(device = "mixed", output_type = types.RGB)
self.res = ops.Resize(device="gpu", resize_x=240, resize_y=240)
self.cast = ops.Cast(device = "gpu",
dtype = types.UINT8)
self.external_data = external_data
self.iterator = iter(self.external_data)
def define_graph(self):
self.jpegs = self.input()
self.labels = self.input_label()
images = self.decode(self.jpegs)
images = self.res(images)
output = self.cast(images)
return (output, self.labels)
def iter_setup(self):
try:
(images, labels) = self.iterator.next()
self.feed_input(self.jpegs, images)
self.feed_input(self.labels, labels)
except StopIteration:
self.iterator = iter(self.external_data)
raise StopIteration
###Output
_____no_output_____
###Markdown
Using the pipelineIn the end, let us see how it works. Please also notice the usage of `last_batch_padded`, which tells the iterator that the difference between the data set size and the batch size alignment is padded by real data that could be skipped when provided to the framework (`fill_last_batch`):
###Code
from nvidia.dali.plugin.mxnet import DALIClassificationIterator as MXNetIterator
eii = ExternalInputIterator(batch_size, 0, 1)
pipe = ExternalSourcePipeline(batch_size=batch_size, num_threads=2, device_id = 0,
external_data = eii)
pii = MXNetIterator(pipe, size=eii.size, last_batch_padded=True, fill_last_batch=False)
for e in range(epochs):
for i, data in enumerate(pii):
print("epoch: {}, iter {}, real batch size: {}".format(e, i, data[0].data[0].shape[0]))
pii.reset()
###Output
epoch: 0, iter 0, real batch size: 3
epoch: 0, iter 1, real batch size: 3
epoch: 0, iter 2, real batch size: 3
epoch: 0, iter 3, real batch size: 3
epoch: 0, iter 4, real batch size: 3
epoch: 0, iter 5, real batch size: 3
epoch: 0, iter 6, real batch size: 3
epoch: 1, iter 0, real batch size: 3
epoch: 1, iter 1, real batch size: 3
epoch: 1, iter 2, real batch size: 3
epoch: 1, iter 3, real batch size: 3
epoch: 1, iter 4, real batch size: 3
epoch: 1, iter 5, real batch size: 3
epoch: 1, iter 6, real batch size: 3
epoch: 2, iter 0, real batch size: 3
epoch: 2, iter 1, real batch size: 3
epoch: 2, iter 2, real batch size: 3
epoch: 2, iter 3, real batch size: 3
epoch: 2, iter 4, real batch size: 3
epoch: 2, iter 5, real batch size: 3
epoch: 2, iter 6, real batch size: 3
###Markdown
ExternalSource operatorIn this example, we will see how to use the `ExternalSource` operator with the MXNet DALI iterator, which allows us to use an external data source as an input to the Pipeline.In order to achieve that, we have to define an Iterator or Generator class whose `next` function will return one or several `numpy` arrays.
###Code
import types
import collections
import numpy as np
from random import shuffle
from nvidia.dali.pipeline import Pipeline
import nvidia.dali as dali
import nvidia.dali.fn as fn
import mxnet
batch_size = 3
epochs = 3
###Output
_____no_output_____
###Markdown
Defining the iterator
###Code
class ExternalInputIterator(object):
def __init__(self, batch_size, device_id, num_gpus):
self.images_dir = "../../data/images/"
self.batch_size = batch_size
with open(self.images_dir + "file_list.txt", 'r') as f:
            self.files = [line.rstrip() for line in f if line != '']
# whole data set size
self.data_set_len = len(self.files)
# based on the device_id and total number of GPUs - world size
# get proper shard
self.files = self.files[self.data_set_len * device_id // num_gpus:
self.data_set_len * (device_id + 1) // num_gpus]
self.n = len(self.files)
def __iter__(self):
self.i = 0
shuffle(self.files)
return self
def __next__(self):
batch = []
labels = []
if self.i >= self.n:
self.__iter__()
raise StopIteration
for _ in range(self.batch_size):
jpeg_filename, label = self.files[self.i % self.n].split(' ')
batch.append(np.fromfile(self.images_dir + jpeg_filename, dtype = np.uint8)) # we can use numpy
labels.append(mxnet.ndarray.array([int(label)], dtype = 'uint8')) # or MXNet native arrays
self.i += 1
return (batch, labels)
def __len__(self):
return self.data_set_len
next = __next__
###Output
_____no_output_____
###Markdown
Defining the pipelineNow let's define our pipeline. We need an instance of the ``Pipeline`` class and some operators which will define the processing graph. Our external source provides 2 outputs which we can conveniently unpack by specifying ``num_outputs=2`` in the external source operator.
###Code
def ExternalSourcePipeline(batch_size, num_threads, device_id, external_data):
pipe = Pipeline(batch_size, num_threads, device_id)
with pipe:
jpegs, labels = fn.external_source(source=external_data, num_outputs=2)
images = fn.image_decoder(jpegs, device="mixed")
images = fn.resize(images, resize_x=240, resize_y=240)
output = fn.cast(images, dtype=dali.types.UINT8)
pipe.set_outputs(output, labels)
return pipe
###Output
_____no_output_____
###Markdown
Using the pipelineIn the end, let us see how it works.`last_batch_padded` and `last_batch_policy` are set here only for demonstration purposes. The user may write any custom code and change the epoch size from epoch to epoch. In that case, it is recommended to set `size` to -1 and let the iterator just wait for the StopIteration exception from `iter_setup`.The `last_batch_padded` here tells the iterator that the difference between the dataset size and the batch size alignment is padded by real data that could be skipped when provided to the framework (`last_batch_policy`):
###Code
from nvidia.dali.plugin.mxnet import DALIClassificationIterator as MXNetIterator
from nvidia.dali.plugin.mxnet import LastBatchPolicy
eii = ExternalInputIterator(batch_size, 0, 1)
pipe = ExternalSourcePipeline(batch_size=batch_size, num_threads=2, device_id = 0,
external_data = eii)
pii = MXNetIterator(pipe, last_batch_padded=True, last_batch_policy=LastBatchPolicy.PARTIAL)
for e in range(epochs):
for i, data in enumerate(pii):
print("epoch: {}, iter {}, real batch size: {}".format(e, i, data[0].data[0].shape[0]))
pii.reset()
###Output
epoch: 0, iter 0, real batch size: 3
epoch: 0, iter 1, real batch size: 3
epoch: 0, iter 2, real batch size: 3
epoch: 0, iter 3, real batch size: 3
epoch: 0, iter 4, real batch size: 3
epoch: 0, iter 5, real batch size: 3
epoch: 0, iter 6, real batch size: 3
epoch: 1, iter 0, real batch size: 3
epoch: 1, iter 1, real batch size: 3
epoch: 1, iter 2, real batch size: 3
epoch: 1, iter 3, real batch size: 3
epoch: 1, iter 4, real batch size: 3
epoch: 1, iter 5, real batch size: 3
epoch: 1, iter 6, real batch size: 3
epoch: 2, iter 0, real batch size: 3
epoch: 2, iter 1, real batch size: 3
epoch: 2, iter 2, real batch size: 3
epoch: 2, iter 3, real batch size: 3
epoch: 2, iter 4, real batch size: 3
epoch: 2, iter 5, real batch size: 3
epoch: 2, iter 6, real batch size: 3
###Markdown
ExternalSource operatorIn this example, we will see how to use the `ExternalSource` operator with the MXNet DALI iterator, which allows us to use an external data source as an input to the Pipeline.In order to achieve that, we have to define an Iterator or Generator class whose `next` function will return one or several `numpy` arrays.
###Code
import types
import collections
import numpy as np
from random import shuffle
from nvidia.dali.pipeline import Pipeline
import nvidia.dali as dali
import nvidia.dali.fn as fn
import mxnet
batch_size = 3
epochs = 3
###Output
_____no_output_____
###Markdown
Defining the iterator
###Code
class ExternalInputIterator(object):
def __init__(self, batch_size, device_id, num_gpus):
self.images_dir = "../../data/images/"
self.batch_size = batch_size
with open(self.images_dir + "file_list.txt", 'r') as f:
            self.files = [line.rstrip() for line in f if line != '']
# whole data set size
self.data_set_len = len(self.files)
# based on the device_id and total number of GPUs - world size
# get proper shard
self.files = self.files[self.data_set_len * device_id // num_gpus:
self.data_set_len * (device_id + 1) // num_gpus]
self.n = len(self.files)
def __iter__(self):
self.i = 0
shuffle(self.files)
return self
def __next__(self):
batch = []
labels = []
if self.i >= self.n:
raise StopIteration
for _ in range(self.batch_size):
jpeg_filename, label = self.files[self.i].split(' ')
batch.append(np.fromfile(self.images_dir + jpeg_filename, dtype = np.uint8)) # we can use numpy
labels.append(mxnet.ndarray.array([int(label)], dtype = 'uint8')) # or MXNet native arrays
self.i = (self.i + 1) % self.n
return (batch, labels)
def __len__(self):
return self.data_set_len
next = __next__
###Output
_____no_output_____
###Markdown
Defining the pipelineNow let's define our pipeline. We need an instance of ``Pipeline`` class and some operators which will define the processing graph. Our external source provides 2 outpus which we can conveniently unpack by specifying ``num_outputs=2`` in the external source operator.
###Code
def ExternalSourcePipeline(batch_size, num_threads, device_id, external_data):
pipe = Pipeline(batch_size, num_threads, device_id)
with pipe:
jpegs, labels = fn.external_source(source=external_data, num_outputs=2)
images = fn.image_decoder(jpegs, device="mixed")
images = fn.resize(images, resize_x=240, resize_y=240)
output = fn.cast(images, dtype=dali.types.UINT8)
pipe.set_outputs(output, labels)
return pipe
###Output
_____no_output_____
###Markdown
Using the pipelineIn the end, let us see how it works.`last_batch_padded` and `last_batch_policy` are set here only for demonstration purposes. The user may write any custom code and change the epoch size from epoch to epoch. In that case, it is recommended to set `size` to -1 and let the iterator just wait for the StopIteration exception from `iter_setup`.The `last_batch_padded` here tells the iterator that the difference between the dataset size and the batch size alignment is padded by real data that could be skipped when provided to the framework (`last_batch_policy`):
###Code
from nvidia.dali.plugin.mxnet import DALIClassificationIterator as MXNetIterator
from nvidia.dali.plugin.mxnet import LastBatchPolicy
eii = ExternalInputIterator(batch_size, 0, 1)
pipe = ExternalSourcePipeline(batch_size=batch_size, num_threads=2, device_id = 0,
external_data = eii)
pii = MXNetIterator(pipe, size=len(eii), last_batch_padded=True, last_batch_policy=LastBatchPolicy.PARTIAL)
for e in range(epochs):
for i, data in enumerate(pii):
print("epoch: {}, iter {}, real batch size: {}".format(e, i, data[0].data[0].shape[0]))
pii.reset()
###Output
epoch: 0, iter 0, real batch size: 3
epoch: 0, iter 1, real batch size: 3
epoch: 0, iter 2, real batch size: 3
epoch: 0, iter 3, real batch size: 3
epoch: 0, iter 4, real batch size: 3
epoch: 0, iter 5, real batch size: 3
epoch: 0, iter 6, real batch size: 3
epoch: 1, iter 0, real batch size: 3
epoch: 1, iter 1, real batch size: 3
epoch: 1, iter 2, real batch size: 3
epoch: 1, iter 3, real batch size: 3
epoch: 1, iter 4, real batch size: 3
epoch: 1, iter 5, real batch size: 3
epoch: 1, iter 6, real batch size: 3
epoch: 2, iter 0, real batch size: 3
epoch: 2, iter 1, real batch size: 3
epoch: 2, iter 2, real batch size: 3
epoch: 2, iter 3, real batch size: 3
epoch: 2, iter 4, real batch size: 3
epoch: 2, iter 5, real batch size: 3
epoch: 2, iter 6, real batch size: 3
###Markdown
ExternalSource operatorIn this example, we will see how to use the `ExternalSource` operator with the MXNet DALI iterator, which allows us to use an external data source as an input to the Pipeline.In order to achieve that, we have to define an Iterator or Generator class whose `next` function will return one or several `numpy` arrays.
###Code
import types
import collections
import numpy as np
from random import shuffle
from nvidia.dali.pipeline import Pipeline
import nvidia.dali.ops as ops
import nvidia.dali.types as types
batch_size = 3
epochs = 3
###Output
_____no_output_____
###Markdown
Defining the iterator
###Code
class ExternalInputIterator(object):
def __init__(self, batch_size, device_id, num_gpus):
self.images_dir = "../../data/images/"
self.batch_size = batch_size
with open(self.images_dir + "file_list.txt", 'r') as f:
            self.files = [line.rstrip() for line in f if line != '']
# whole data set size
self.data_set_len = len(self.files)
# based on the device_id and total number of GPUs - world size
# get proper shard
self.files = self.files[self.data_set_len * device_id // num_gpus:
self.data_set_len * (device_id + 1) // num_gpus]
self.n = len(self.files)
def __iter__(self):
self.i = 0
shuffle(self.files)
return self
def __next__(self):
batch = []
labels = []
if self.i >= self.n:
raise StopIteration
for _ in range(self.batch_size):
jpeg_filename, label = self.files[self.i].split(' ')
f = open(self.images_dir + jpeg_filename, 'rb')
batch.append(np.frombuffer(f.read(), dtype = np.uint8))
labels.append(np.array([label], dtype = np.uint8))
self.i = (self.i + 1) % self.n
return (batch, labels)
@property
def size(self,):
return self.data_set_len
next = __next__
###Output
_____no_output_____
###Markdown
Defining the pipelineNow the pipeline itself will be defined. First of all, a framework iterator will be used, so we need to make sure that the images output by the pipeline are uniform in size; that is why the resize operator is used. Also, `iter_setup` will raise the StopIteration exception when the ExternalInputIterator runs out of data. Worth noticing is that the iterator needs to be recreated so that the next time `iter_setup` is called it has data ready to consume.
###Code
class ExternalSourcePipeline(Pipeline):
def __init__(self, batch_size, num_threads, device_id, external_data):
super(ExternalSourcePipeline, self).__init__(batch_size,
num_threads,
device_id,
seed=12)
self.input = ops.ExternalSource()
self.input_label = ops.ExternalSource()
self.decode = ops.ImageDecoder(device = "mixed", output_type = types.RGB)
self.res = ops.Resize(device="gpu", resize_x=240, resize_y=240)
self.cast = ops.Cast(device = "gpu",
dtype = types.UINT8)
self.external_data = external_data
self.iterator = iter(self.external_data)
def define_graph(self):
self.jpegs = self.input()
self.labels = self.input_label()
images = self.decode(self.jpegs)
images = self.res(images)
output = self.cast(images)
return (output, self.labels)
def iter_setup(self):
try:
(images, labels) = self.iterator.next()
self.feed_input(self.jpegs, images)
self.feed_input(self.labels, labels)
except StopIteration:
self.iterator = iter(self.external_data)
raise StopIteration
###Output
_____no_output_____
###Markdown
Using the pipelineIn the end, let us see how it works.`last_batch_padded` and `last_batch_policy` are set here only for demonstration purposes. The user may write any custom code and change the epoch size from epoch to epoch. In that case, it is recommended to set `size` to -1 and let the iterator just wait for the StopIteration exception from `iter_setup`.The `last_batch_padded` here tells the iterator that the difference between the dataset size and the batch size alignment is padded by real data that could be skipped when provided to the framework (`last_batch_policy`):
###Code
from nvidia.dali.plugin.mxnet import DALIClassificationIterator as MXNetIterator
from nvidia.dali.plugin.mxnet import LastBatchPolicy
eii = ExternalInputIterator(batch_size, 0, 1)
pipe = ExternalSourcePipeline(batch_size=batch_size, num_threads=2, device_id = 0,
external_data = eii)
pii = MXNetIterator(pipe, size=eii.size, last_batch_padded=True, last_batch_policy=LastBatchPolicy.PARTIAL)
for e in range(epochs):
for i, data in enumerate(pii):
print("epoch: {}, iter {}, real batch size: {}".format(e, i, data[0].data[0].shape[0]))
pii.reset()
###Output
/usr/local/lib/python3.6/dist-packages/nvidia/dali/plugin/base_iterator.py:124: Warning: Please set `reader_name` and don't set last_batch_padded and size manually whenever possible. This may lead, in some situations, to miss some samples or return duplicated ones. Check the Sharding section of the documentation for more details.
_iterator_deprecation_warning()
###Markdown
ExternalSource operator
In this example, we will see how to use the `ExternalSource` operator with the MXNet DALI iterator, which allows us to use an external data source as an input to the Pipeline. In order to achieve that, we have to define an Iterator or Generator class whose `next` function returns one or several `numpy` arrays.
###Code
import types
import collections
import numpy as np
from random import shuffle
from nvidia.dali.pipeline import Pipeline
import nvidia.dali as dali
import nvidia.dali.fn as fn
import mxnet
batch_size = 3
epochs = 3
###Output
_____no_output_____
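###Markdown
With the functional API used in this variant, the external data source does not have to be a class at all: `fn.external_source` also accepts a plain generator function via its `source` argument. The cell below is only a sketch of what such a generator could look like (random data, illustrative names); the rest of this example keeps using the iterator class defined next.
###Code
import numpy as np
# Sketch: a generator yielding batches can also serve as an external source.
# Each yielded value is one batch per output: (list of images, list of labels).
def random_batch_generator(batch_size, n_batches=4):
    for _ in range(n_batches):
        images = [np.random.randint(0, 256, size=(8, 8, 3), dtype=np.uint8)
                  for _ in range(batch_size)]
        labels = [np.array([0], dtype=np.uint8) for _ in range(batch_size)]
        yield images, labels
###Output
_____no_output_____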
###Markdown
Defining the Iterator
###Code
class ExternalInputIterator(object):
def __init__(self, batch_size, device_id, num_gpus):
self.images_dir = "../../data/images/"
self.batch_size = batch_size
with open(self.images_dir + "file_list.txt", 'r') as f:
            self.files = [line.rstrip() for line in f if line.rstrip() != '']
# whole data set size
self.data_set_len = len(self.files)
# based on the device_id and total number of GPUs - world size
# get proper shard
self.files = self.files[self.data_set_len * device_id // num_gpus:
self.data_set_len * (device_id + 1) // num_gpus]
self.n = len(self.files)
def __iter__(self):
self.i = 0
shuffle(self.files)
return self
def __next__(self):
batch = []
labels = []
if self.i >= self.n:
self.__iter__()
raise StopIteration
for _ in range(self.batch_size):
jpeg_filename, label = self.files[self.i % self.n].split(' ')
batch.append(np.fromfile(self.images_dir + jpeg_filename, dtype = np.uint8)) # we can use numpy
labels.append(mxnet.ndarray.array([int(label)], dtype = 'uint8')) # or MXNet native arrays
self.i += 1
return (batch, labels)
def __len__(self):
return self.data_set_len
next = __next__
###Output
_____no_output_____
###Markdown
Defining the Pipeline
Now let's define our pipeline. We need an instance of the ``Pipeline`` class and some operators which will define the processing graph. Our external source provides 2 outputs, which we can conveniently unpack by specifying ``num_outputs=2`` in the external source operator.
###Code
def ExternalSourcePipeline(batch_size, num_threads, device_id, external_data):
pipe = Pipeline(batch_size, num_threads, device_id)
with pipe:
jpegs, labels = fn.external_source(source=external_data, num_outputs=2, dtype=dali.types.UINT8)
images = fn.decoders.image(jpegs, device="mixed")
images = fn.resize(images, resize_x=240, resize_y=240)
output = fn.cast(images, dtype=dali.types.UINT8)
pipe.set_outputs(output, labels)
return pipe
###Output
_____no_output_____
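###Markdown
Recent DALI releases also offer a decorator-based way to express the same graph. The sketch below shows what this pipeline could look like with `nvidia.dali.pipeline_def`; it is an alternative formulation rather than part of this tutorial, and `external_source_pipeline_def` is just an illustrative name.
###Code
from nvidia.dali import pipeline_def
# Sketch: the same processing graph expressed with the pipeline_def decorator.
# Pipeline arguments (batch_size, num_threads, device_id) are passed when the
# decorated function is called.
@pipeline_def
def external_source_pipeline_def(external_data):
    jpegs, labels = fn.external_source(source=external_data, num_outputs=2, dtype=dali.types.UINT8)
    images = fn.decoders.image(jpegs, device="mixed")
    images = fn.resize(images, resize_x=240, resize_y=240)
    return fn.cast(images, dtype=dali.types.UINT8), labels
alt_pipe = external_source_pipeline_def(ExternalInputIterator(batch_size, 0, 1),
                                        batch_size=batch_size, num_threads=2, device_id=0)
###Output
_____no_output_____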
###Markdown
Using the Pipeline
In the end, let us see how it works. `last_batch_padded` and `last_batch_policy` are set here only for demonstration purposes. The user may write any custom code and change the epoch size from epoch to epoch. In that case, it is recommended to set `size` to -1 and let the iterator just wait for the StopIteration exception raised by the external source iterator. The `last_batch_padded` flag here tells the iterator that the difference between the dataset size and the batch size alignment is padded by real data that could be skipped when it is provided to the framework (`last_batch_policy`):
###Code
from nvidia.dali.plugin.mxnet import DALIClassificationIterator as MXNetIterator
from nvidia.dali.plugin.mxnet import LastBatchPolicy
eii = ExternalInputIterator(batch_size, 0, 1)
pipe = ExternalSourcePipeline(batch_size=batch_size, num_threads=2, device_id = 0,
external_data = eii)
pii = MXNetIterator(pipe, last_batch_padded=True, last_batch_policy=LastBatchPolicy.PARTIAL)
for e in range(epochs):
for i, data in enumerate(pii):
print("epoch: {}, iter {}, real batch size: {}".format(e, i, data[0].data[0].shape[0]))
pii.reset()
###Output
epoch: 0, iter 0, real batch size: 3
epoch: 0, iter 1, real batch size: 3
epoch: 0, iter 2, real batch size: 3
epoch: 0, iter 3, real batch size: 3
epoch: 0, iter 4, real batch size: 3
epoch: 0, iter 5, real batch size: 3
epoch: 0, iter 6, real batch size: 3
epoch: 1, iter 0, real batch size: 3
epoch: 1, iter 1, real batch size: 3
epoch: 1, iter 2, real batch size: 3
epoch: 1, iter 3, real batch size: 3
epoch: 1, iter 4, real batch size: 3
epoch: 1, iter 5, real batch size: 3
epoch: 1, iter 6, real batch size: 3
epoch: 2, iter 0, real batch size: 3
epoch: 2, iter 1, real batch size: 3
epoch: 2, iter 2, real batch size: 3
epoch: 2, iter 3, real batch size: 3
epoch: 2, iter 4, real batch size: 3
epoch: 2, iter 5, real batch size: 3
epoch: 2, iter 6, real batch size: 3
###Markdown
ExternalSource operator
In this example, we will see how to use the `ExternalSource` operator with the MXNet DALI iterator, which allows us to use an external data source as an input to the Pipeline. In order to achieve that, we have to define an Iterator or Generator class whose `next` function returns one or several `numpy` arrays.
###Code
import types
import collections
import numpy as np
from random import shuffle
from nvidia.dali.pipeline import Pipeline
import nvidia.dali as dali
import nvidia.dali.fn as fn
import mxnet
batch_size = 3
epochs = 3
###Output
_____no_output_____
###Markdown
Defining the Iterator
###Code
class ExternalInputIterator(object):
def __init__(self, batch_size, device_id, num_gpus):
self.images_dir = "../../data/images/"
self.batch_size = batch_size
with open(self.images_dir + "file_list.txt", 'r') as f:
            self.files = [line.rstrip() for line in f if line.rstrip() != '']
# whole data set size
self.data_set_len = len(self.files)
# based on the device_id and total number of GPUs - world size
# get proper shard
self.files = self.files[self.data_set_len * device_id // num_gpus:
self.data_set_len * (device_id + 1) // num_gpus]
self.n = len(self.files)
def __iter__(self):
self.i = 0
shuffle(self.files)
return self
def __next__(self):
batch = []
labels = []
if self.i >= self.n:
self.__iter__()
raise StopIteration
for _ in range(self.batch_size):
jpeg_filename, label = self.files[self.i % self.n].split(' ')
batch.append(np.fromfile(self.images_dir + jpeg_filename, dtype = np.uint8)) # we can use numpy
labels.append(mxnet.ndarray.array([int(label)], dtype = 'uint8')) # or MXNet native arrays
self.i += 1
return (batch, labels)
def __len__(self):
return self.data_set_len
next = __next__
###Output
_____no_output_____
###Markdown
Defining the Pipeline
Now let's define our pipeline. We need an instance of the ``Pipeline`` class and some operators which will define the processing graph. Our external source provides 2 outputs, which we can conveniently unpack by specifying ``num_outputs=2`` in the external source operator.
###Code
def ExternalSourcePipeline(batch_size, num_threads, device_id, external_data):
pipe = Pipeline(batch_size, num_threads, device_id)
with pipe:
jpegs, labels = fn.external_source(source=external_data, num_outputs=2)
images = fn.decoders.image(jpegs, device="mixed")
images = fn.resize(images, resize_x=240, resize_y=240)
output = fn.cast(images, dtype=dali.types.UINT8)
pipe.set_outputs(output, labels)
return pipe
###Output
_____no_output_____
###Markdown
Using the Pipeline
In the end, let us see how it works. `last_batch_padded` and `last_batch_policy` are set here only for demonstration purposes. The user may write any custom code and change the epoch size from epoch to epoch. In that case, it is recommended to set `size` to -1 and let the iterator just wait for the StopIteration exception raised by the external source iterator. The `last_batch_padded` flag here tells the iterator that the difference between the dataset size and the batch size alignment is padded by real data that could be skipped when it is provided to the framework (`last_batch_policy`):
###Code
from nvidia.dali.plugin.mxnet import DALIClassificationIterator as MXNetIterator
from nvidia.dali.plugin.mxnet import LastBatchPolicy
eii = ExternalInputIterator(batch_size, 0, 1)
pipe = ExternalSourcePipeline(batch_size=batch_size, num_threads=2, device_id = 0,
external_data = eii)
pii = MXNetIterator(pipe, last_batch_padded=True, last_batch_policy=LastBatchPolicy.PARTIAL)
for e in range(epochs):
for i, data in enumerate(pii):
print("epoch: {}, iter {}, real batch size: {}".format(e, i, data[0].data[0].shape[0]))
pii.reset()
###Output
epoch: 0, iter 0, real batch size: 3
epoch: 0, iter 1, real batch size: 3
epoch: 0, iter 2, real batch size: 3
epoch: 0, iter 3, real batch size: 3
epoch: 0, iter 4, real batch size: 3
epoch: 0, iter 5, real batch size: 3
epoch: 0, iter 6, real batch size: 3
epoch: 1, iter 0, real batch size: 3
epoch: 1, iter 1, real batch size: 3
epoch: 1, iter 2, real batch size: 3
epoch: 1, iter 3, real batch size: 3
epoch: 1, iter 4, real batch size: 3
epoch: 1, iter 5, real batch size: 3
epoch: 1, iter 6, real batch size: 3
epoch: 2, iter 0, real batch size: 3
epoch: 2, iter 1, real batch size: 3
epoch: 2, iter 2, real batch size: 3
epoch: 2, iter 3, real batch size: 3
epoch: 2, iter 4, real batch size: 3
epoch: 2, iter 5, real batch size: 3
epoch: 2, iter 6, real batch size: 3
###Markdown
ExternalSource operator
In this example, we will see how to use the `ExternalSource` operator with the MXNet DALI iterator, which allows us to use an external data source as an input to the Pipeline. In order to achieve that, we have to define an Iterator or Generator class whose `next` function returns one or several `numpy` arrays.
###Code
import types
import collections
import numpy as np
from random import shuffle
from nvidia.dali.pipeline import Pipeline
import nvidia.dali.ops as ops
import nvidia.dali.types as types
batch_size = 3
epochs = 3
###Output
_____no_output_____
###Markdown
Defining the iterator
###Code
class ExternalInputIterator(object):
def __init__(self, batch_size, device_id, num_gpus):
self.images_dir = "../../data/images/"
self.batch_size = batch_size
with open(self.images_dir + "file_list.txt", 'r') as f:
            self.files = [line.rstrip() for line in f if line.rstrip() != '']
# whole data set size
self.data_set_len = len(self.files)
# based on the device_id and total number of GPUs - world size
# get proper shard
self.files = self.files[self.data_set_len * device_id // num_gpus:
self.data_set_len * (device_id + 1) // num_gpus]
self.n = len(self.files)
def __iter__(self):
self.i = 0
shuffle(self.files)
return self
def __next__(self):
batch = []
labels = []
if self.i >= self.n:
raise StopIteration
for _ in range(self.batch_size):
jpeg_filename, label = self.files[self.i].split(' ')
            with open(self.images_dir + jpeg_filename, 'rb') as f:
                batch.append(np.frombuffer(f.read(), dtype = np.uint8))
labels.append(np.array([label], dtype = np.uint8))
self.i = (self.i + 1) % self.n
return (batch, labels)
@property
def size(self,):
return self.data_set_len
next = __next__
###Output
_____no_output_____
###Markdown
Defining the pipeline
Now the pipeline itself will be defined. First of all, a framework iterator will be used, so we need to make sure that the images output by the pipeline are uniform in size; that is why the resize operator is used. Also, `iter_setup` will raise the StopIteration exception when the ExternalInputIterator runs out of data. Note that the iterator needs to be recreated inside the exception handler, so that the next time `iter_setup` is called it has data ready to consume.
###Code
class ExternalSourcePipeline(Pipeline):
def __init__(self, batch_size, num_threads, device_id, external_data):
super(ExternalSourcePipeline, self).__init__(batch_size,
num_threads,
device_id,
seed=12)
self.input = ops.ExternalSource()
self.input_label = ops.ExternalSource()
self.decode = ops.ImageDecoder(device = "mixed", output_type = types.RGB)
self.res = ops.Resize(device="gpu", resize_x=240, resize_y=240)
self.cast = ops.Cast(device = "gpu",
dtype = types.UINT8)
self.external_data = external_data
self.iterator = iter(self.external_data)
def define_graph(self):
self.jpegs = self.input()
self.labels = self.input_label()
images = self.decode(self.jpegs)
images = self.res(images)
output = self.cast(images)
return (output, self.labels)
def iter_setup(self):
try:
(images, labels) = self.iterator.next()
self.feed_input(self.jpegs, images)
self.feed_input(self.labels, labels)
except StopIteration:
self.iterator = iter(self.external_data)
raise StopIteration
###Output
_____no_output_____
###Markdown
Using the pipeline
In the end, let us see how it works. `last_batch_padded` and `fill_last_batch` are set here only for demonstration purposes. The user may write any custom code and change the epoch size from epoch to epoch. In that case, it is recommended to set `size` to -1 and let the iterator just wait for the StopIteration exception raised from `iter_setup`. The `last_batch_padded` flag here tells the iterator that the difference between the data set size and the batch size alignment is padded by real data that could be skipped when it is provided to the framework (`fill_last_batch`):
###Code
from nvidia.dali.plugin.mxnet import DALIClassificationIterator as MXNetIterator
eii = ExternalInputIterator(batch_size, 0, 1)
pipe = ExternalSourcePipeline(batch_size=batch_size, num_threads=2, device_id = 0,
external_data = eii)
pii = MXNetIterator(pipe, size=eii.size, last_batch_padded=True, fill_last_batch=False)
for e in range(epochs):
for i, data in enumerate(pii):
print("epoch: {}, iter {}, real batch size: {}".format(e, i, data[0].data[0].shape[0]))
pii.reset()
###Output
/usr/local/lib/python3.6/dist-packages/nvidia/dali/plugin/base_iterator.py:124: Warning: Please set `reader_name` and don't set last_batch_padded and size manually whenever possible. This may lead, in some situations, to miss some samples or return duplicated ones. Check the Sharding section of the documentation for more details.
_iterator_deprecation_warning()
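###Markdown
The warning above is emitted because `fill_last_batch` (together with a manually passed `size`) is a legacy way of configuring the iterator. In newer DALI releases the same intent is expressed with the `last_batch_policy` argument, as in the earlier variant of this example; the cell below is a sketch of that equivalent call and assumes a DALI version that provides `LastBatchPolicy`.
###Code
from nvidia.dali.plugin.mxnet import LastBatchPolicy
# Sketch: the same setup configured with last_batch_policy instead of the
# legacy fill_last_batch flag (PARTIAL returns the last, smaller batch as-is).
eii_new = ExternalInputIterator(batch_size, 0, 1)
pipe_new = ExternalSourcePipeline(batch_size=batch_size, num_threads=2, device_id=0,
                                  external_data=eii_new)
pii_new = MXNetIterator(pipe_new, size=eii_new.size, last_batch_padded=True,
                        last_batch_policy=LastBatchPolicy.PARTIAL)
###Output
_____no_output_____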
|